Test Report: Hyper-V_Windows 19046

fb148a11d8032b35b0d9cd6893af3c5921ed4428:2024-06-10:34835

Test failures (16/198)

TestAddons/parallel/Registry (89.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 21.9791ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-gbrng" [693833b2-422d-4e9c-be76-dc11a1a5e30e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.029573s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bltds" [17b25812-54a8-4632-a65b-adf666c9677a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0244835s
addons_test.go:342: (dbg) Run:  kubectl --context addons-987700 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-987700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-987700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (17.0020468s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 ip: (2.8004119s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0610 10:30:11.225068    7208 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-987700 ip"
2024/06/10 10:30:13 [DEBUG] GET http://172.17.154.55:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 addons disable registry --alsologtostderr -v=1: (17.8354173s)
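
The failure above comes from the assertion at addons_test.go:366, which treats any non-empty stderr from "minikube ip" as a test failure; here the only stderr output was a warning about a stale Docker CLI context on the Jenkins host. The following standalone Go program is a minimal sketch of that check, not the actual test code (the real test goes through the integration-test helpers); the binary path and profile name are copied from the log above.

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same command the test ran:
		//   out/minikube-windows-amd64.exe -p addons-987700 ip
		cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "addons-987700", "ip")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("command failed:", err)
		}
		// The assertion is strict: ANY stderr output fails the test, so a stray
		// warning (like the unresolved Docker CLI context above) is enough.
		if stderr.Len() > 0 {
			fmt.Printf("expected stderr to be -empty- but got: %q\n", stderr.String())
			return
		}
		fmt.Println("minikube ip:", stdout.String())
	}

Assuming a standard Docker CLI setup on the host, the warning itself usually means the currentContext recorded in %USERPROFILE%\.docker\config.json points at context metadata that no longer exists; resetting it (for example with "docker context use default") is one plausible way to clear the stderr noise.
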
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-987700 -n addons-987700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-987700 -n addons-987700: (13.7806699s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 logs -n 25: (9.8152386s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-841800 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | -p download-only-841800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-841800                                                                     | download-only-841800 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| start   | -o=json --download-only                                                                     | download-only-289600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | -p download-only-289600                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-289600                                                                     | download-only-289600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-841800                                                                     | download-only-841800 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-289600                                                                     | download-only-289600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-282000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | binary-mirror-282000                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:60263                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-282000                                                                     | binary-mirror-282000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:22 UTC | 10 Jun 24 10:22 UTC |
	| addons  | disable dashboard -p                                                                        | addons-987700        | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:22 UTC |                     |
	|         | addons-987700                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-987700        | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:22 UTC |                     |
	|         | addons-987700                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-987700 --wait=true                                                                | addons-987700        | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:22 UTC | 10 Jun 24 10:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| ip      | addons-987700 ip                                                                            | addons-987700        | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:30 UTC | 10 Jun 24 10:30 UTC |
	| ssh     | addons-987700 ssh cat                                                                       | addons-987700        | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:30 UTC | 10 Jun 24 10:30 UTC |
	|         | /opt/local-path-provisioner/pvc-160300bd-a1e2-4e63-bf32-0e1d8c304ff0_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| addons  | addons-987700 addons disable                                                                | addons-987700        | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:30 UTC | 10 Jun 24 10:30 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-987700 addons disable                                                                | addons-987700        | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:30 UTC | 10 Jun 24 10:30 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| addons  | addons-987700 addons disable                                                                | addons-987700        | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:30 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:22:02
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:22:02.823711    7424 out.go:291] Setting OutFile to fd 888 ...
	I0610 10:22:02.824554    7424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:22:02.824554    7424 out.go:304] Setting ErrFile to fd 900...
	I0610 10:22:02.824641    7424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:22:02.848700    7424 out.go:298] Setting JSON to false
	I0610 10:22:02.852036    7424 start.go:129] hostinfo: {"hostname":"minikube6","uptime":14811,"bootTime":1718000111,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 10:22:02.852036    7424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 10:22:02.858904    7424 out.go:177] * [addons-987700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 10:22:02.862925    7424 notify.go:220] Checking for updates...
	I0610 10:22:02.866980    7424 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 10:22:02.869547    7424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:22:02.872322    7424 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 10:22:02.875380    7424 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:22:02.878233    7424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:22:02.881865    7424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:22:08.743036    7424 out.go:177] * Using the hyperv driver based on user configuration
	I0610 10:22:08.746404    7424 start.go:297] selected driver: hyperv
	I0610 10:22:08.746404    7424 start.go:901] validating driver "hyperv" against <nil>
	I0610 10:22:08.746954    7424 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:22:08.797527    7424 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 10:22:08.798591    7424 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:22:08.798591    7424 cni.go:84] Creating CNI manager for ""
	I0610 10:22:08.798591    7424 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:22:08.798591    7424 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:22:08.798591    7424 start.go:340] cluster config:
	{Name:addons-987700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-987700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:22:08.799566    7424 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:22:08.808517    7424 out.go:177] * Starting "addons-987700" primary control-plane node in "addons-987700" cluster
	I0610 10:22:08.811531    7424 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 10:22:08.811531    7424 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 10:22:08.811531    7424 cache.go:56] Caching tarball of preloaded images
	I0610 10:22:08.811942    7424 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 10:22:08.811942    7424 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 10:22:08.812666    7424 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\config.json ...
	I0610 10:22:08.813239    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\config.json: {Name:mkff9fdf552144a21024480c20b636c5782c8fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:08.814437    7424 start.go:360] acquireMachinesLock for addons-987700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:22:08.814437    7424 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-987700"
	I0610 10:22:08.814437    7424 start.go:93] Provisioning new machine with config: &{Name:addons-987700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-987700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:22:08.814437    7424 start.go:125] createHost starting for "" (driver="hyperv")
	I0610 10:22:08.819037    7424 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 10:22:08.819699    7424 start.go:159] libmachine.API.Create for "addons-987700" (driver="hyperv")
	I0610 10:22:08.819699    7424 client.go:168] LocalClient.Create starting
	I0610 10:22:08.820403    7424 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 10:22:09.104415    7424 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 10:22:09.826862    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 10:22:12.173048    7424 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 10:22:12.173048    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:12.173436    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 10:22:13.991030    7424 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 10:22:13.991030    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:13.991753    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 10:22:15.503343    7424 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 10:22:15.503343    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:15.504283    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 10:22:19.404513    7424 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 10:22:19.405103    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:19.407457    7424 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 10:22:19.903167    7424 main.go:141] libmachine: Creating SSH key...
	I0610 10:22:20.200613    7424 main.go:141] libmachine: Creating VM...
	I0610 10:22:20.200613    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 10:22:23.218049    7424 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 10:22:23.218049    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:23.218158    7424 main.go:141] libmachine: Using switch "Default Switch"
	I0610 10:22:23.218240    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 10:22:25.034223    7424 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 10:22:25.034223    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:25.034223    7424 main.go:141] libmachine: Creating VHD
	I0610 10:22:25.034373    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 10:22:28.971652    7424 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 34B9E2C8-46FD-4CFE-B625-14FCD7D88E88
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 10:22:28.971740    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:28.971740    7424 main.go:141] libmachine: Writing magic tar header
	I0610 10:22:28.971874    7424 main.go:141] libmachine: Writing SSH key tar header
	I0610 10:22:28.982264    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 10:22:32.295308    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:22:32.295599    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:32.295599    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\disk.vhd' -SizeBytes 20000MB
	I0610 10:22:34.978108    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:22:34.978108    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:34.978108    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-987700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0610 10:22:39.519353    7424 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-987700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 10:22:39.519483    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:39.519553    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-987700 -DynamicMemoryEnabled $false
	I0610 10:22:41.896661    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:22:41.896661    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:41.897520    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-987700 -Count 2
	I0610 10:22:44.159915    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:22:44.159915    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:44.160374    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-987700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\boot2docker.iso'
	I0610 10:22:46.888979    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:22:46.888979    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:46.889972    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-987700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\disk.vhd'
	I0610 10:22:49.818569    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:22:49.818569    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:49.818569    7424 main.go:141] libmachine: Starting VM...
	I0610 10:22:49.818902    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-987700
	I0610 10:22:53.155678    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:22:53.156623    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:53.156656    7424 main.go:141] libmachine: Waiting for host to start...
	I0610 10:22:53.156656    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:22:55.578722    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:22:55.579506    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:55.579571    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:22:58.350845    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:22:58.350845    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:22:59.359244    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:01.705107    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:01.705107    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:01.705107    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:23:04.419090    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:23:04.419090    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:05.420215    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:07.821312    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:07.821384    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:07.821449    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:23:10.557213    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:23:10.557213    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:11.567781    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:13.916110    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:13.916190    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:13.916190    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:23:16.642521    7424 main.go:141] libmachine: [stdout =====>] : 
	I0610 10:23:16.642521    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:17.645397    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:20.040202    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:20.040202    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:20.040202    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:23:22.801885    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:23:22.802461    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:22.802461    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:25.050560    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:25.050560    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:25.051002    7424 machine.go:94] provisionDockerMachine start ...
	I0610 10:23:25.051060    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:27.400901    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:27.401393    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:27.401393    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:23:30.108495    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:23:30.108739    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:30.114755    7424 main.go:141] libmachine: Using SSH client type: native
	I0610 10:23:30.126089    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.154.55 22 <nil> <nil>}
	I0610 10:23:30.126089    7424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 10:23:30.255276    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 10:23:30.255343    7424 buildroot.go:166] provisioning hostname "addons-987700"
	I0610 10:23:30.255343    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:32.558842    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:32.560031    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:32.560031    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:23:35.266428    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:23:35.266428    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:35.272353    7424 main.go:141] libmachine: Using SSH client type: native
	I0610 10:23:35.272880    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.154.55 22 <nil> <nil>}
	I0610 10:23:35.272880    7424 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-987700 && echo "addons-987700" | sudo tee /etc/hostname
	I0610 10:23:35.431633    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-987700
	
	I0610 10:23:35.431769    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:37.742670    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:37.742670    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:37.742844    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:23:40.446722    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:23:40.447401    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:40.453477    7424 main.go:141] libmachine: Using SSH client type: native
	I0610 10:23:40.454071    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.154.55 22 <nil> <nil>}
	I0610 10:23:40.454224    7424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-987700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-987700/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-987700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:23:40.608300    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:23:40.608300    7424 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 10:23:40.608300    7424 buildroot.go:174] setting up certificates
	I0610 10:23:40.608300    7424 provision.go:84] configureAuth start
	I0610 10:23:40.608300    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:42.927701    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:42.927701    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:42.927701    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:23:45.610688    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:23:45.610688    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:45.611181    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:47.879237    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:47.880296    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:47.880296    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:23:50.600111    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:23:50.600111    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:50.600374    7424 provision.go:143] copyHostCerts
	I0610 10:23:50.601087    7424 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 10:23:50.602957    7424 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 10:23:50.603771    7424 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 10:23:50.605084    7424 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-987700 san=[127.0.0.1 172.17.154.55 addons-987700 localhost minikube]
	I0610 10:23:50.861152    7424 provision.go:177] copyRemoteCerts
	I0610 10:23:50.876372    7424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:23:50.876540    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:53.138182    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:53.139189    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:53.139189    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:23:55.817623    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:23:55.817623    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:55.819195    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:23:55.927146    7424 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0505029s)
	I0610 10:23:55.927199    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:23:55.985230    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 10:23:56.029762    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:23:56.080945    7424 provision.go:87] duration metric: took 15.4725188s to configureAuth
	I0610 10:23:56.080945    7424 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:23:56.081941    7424 config.go:182] Loaded profile config "addons-987700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 10:23:56.081941    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:23:58.329052    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:23:58.329052    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:23:58.329446    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:24:00.999066    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:24:00.999714    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:01.005654    7424 main.go:141] libmachine: Using SSH client type: native
	I0610 10:24:01.006428    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.154.55 22 <nil> <nil>}
	I0610 10:24:01.006428    7424 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 10:24:01.141523    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 10:24:01.141648    7424 buildroot.go:70] root file system type: tmpfs
	I0610 10:24:01.141850    7424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 10:24:01.141932    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:24:03.382609    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:24:03.382609    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:03.382609    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:24:06.080495    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:24:06.080942    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:06.086398    7424 main.go:141] libmachine: Using SSH client type: native
	I0610 10:24:06.087176    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.154.55 22 <nil> <nil>}
	I0610 10:24:06.087176    7424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 10:24:06.256874    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 10:24:06.256940    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:24:08.557006    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:24:08.557006    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:08.557836    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:24:11.255933    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:24:11.255933    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:11.260806    7424 main.go:141] libmachine: Using SSH client type: native
	I0610 10:24:11.261506    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.154.55 22 <nil> <nil>}
	I0610 10:24:11.261768    7424 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 10:24:13.485987    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 10:24:13.486062    7424 machine.go:97] duration metric: took 48.4346626s to provisionDockerMachine
	I0610 10:24:13.486138    7424 client.go:171] duration metric: took 2m4.6654166s to LocalClient.Create
	I0610 10:24:13.486138    7424 start.go:167] duration metric: took 2m4.6654166s to libmachine.API.Create "addons-987700"
	I0610 10:24:13.486222    7424 start.go:293] postStartSetup for "addons-987700" (driver="hyperv")
	I0610 10:24:13.486222    7424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:24:13.498203    7424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:24:13.498203    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:24:15.832941    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:24:15.834007    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:15.834007    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:24:18.480575    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:24:18.480575    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:18.481907    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:24:18.581159    7424 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0828004s)
	I0610 10:24:18.594286    7424 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:24:18.604166    7424 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:24:18.604307    7424 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 10:24:18.604307    7424 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 10:24:18.605012    7424 start.go:296] duration metric: took 5.1187472s for postStartSetup
	I0610 10:24:18.608944    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:24:20.823310    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:24:20.823399    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:20.823477    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:24:23.531643    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:24:23.531643    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:23.531643    7424 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\config.json ...
	I0610 10:24:23.534455    7424 start.go:128] duration metric: took 2m14.7189142s to createHost
	I0610 10:24:23.534555    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:24:25.803265    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:24:25.803265    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:25.803265    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:24:28.582705    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:24:28.582705    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:28.589365    7424 main.go:141] libmachine: Using SSH client type: native
	I0610 10:24:28.589898    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.154.55 22 <nil> <nil>}
	I0610 10:24:28.589898    7424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 10:24:28.730517    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718015068.737646063
	
	I0610 10:24:28.730517    7424 fix.go:216] guest clock: 1718015068.737646063
	I0610 10:24:28.730517    7424 fix.go:229] Guest: 2024-06-10 10:24:28.737646063 +0000 UTC Remote: 2024-06-10 10:24:23.5345551 +0000 UTC m=+140.882702001 (delta=5.203090963s)
	I0610 10:24:28.730517    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:24:30.992230    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:24:30.992230    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:30.993246    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:24:33.686989    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:24:33.686989    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:33.692288    7424 main.go:141] libmachine: Using SSH client type: native
	I0610 10:24:33.692546    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.154.55 22 <nil> <nil>}
	I0610 10:24:33.692546    7424 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718015068
	I0610 10:24:33.849862    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 10:24:28 UTC 2024
	
	I0610 10:24:33.849862    7424 fix.go:236] clock set: Mon Jun 10 10:24:28 UTC 2024
	 (err=<nil>)
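	
	The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host-side timestamp to compute the ~5.2s delta, and then pin the guest clock with `date -s @<epoch>`. A rough host-side sketch of that sequence (the SSH target and the decision to always reset are illustrative):
	
	# Sketch: measure guest clock skew over SSH, then reset the guest clock.
	guest=$(ssh docker@172.17.154.55 'date +%s')   # guest epoch seconds
	host=$(date +%s)                               # local epoch seconds
	echo "guest clock skew: $((guest - host))s"
	# Pin the guest clock; minikube logs the result as "clock set".
	ssh docker@172.17.154.55 "sudo date -s @${host}"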
	I0610 10:24:33.849948    7424 start.go:83] releasing machines lock for "addons-987700", held for 2m25.034323s
	I0610 10:24:33.850088    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:24:36.152642    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:24:36.152828    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:36.152919    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:24:38.935233    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:24:38.935233    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:38.939476    7424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:24:38.939476    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:24:38.950559    7424 ssh_runner.go:195] Run: cat /version.json
	I0610 10:24:38.950559    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:24:41.306465    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:24:41.306635    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:41.306709    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:24:41.307302    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:24:41.307302    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:41.307302    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:24:44.114816    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:24:44.114910    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:44.114910    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:24:44.149720    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:24:44.149828    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:24:44.150726    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:24:44.222887    7424 ssh_runner.go:235] Completed: cat /version.json: (5.2722854s)
	I0610 10:24:44.235109    7424 ssh_runner.go:195] Run: systemctl --version
	I0610 10:24:44.298183    7424 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3586637s)
	I0610 10:24:44.312072    7424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:24:44.322042    7424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:24:44.334483    7424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:24:44.363876    7424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 10:24:44.363876    7424 start.go:494] detecting cgroup driver to use...
	I0610 10:24:44.363876    7424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:24:44.410301    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 10:24:44.442137    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 10:24:44.462252    7424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 10:24:44.473743    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 10:24:44.506904    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 10:24:44.542424    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 10:24:44.575868    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 10:24:44.610992    7424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:24:44.645120    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 10:24:44.677391    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 10:24:44.712144    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 10:24:44.746446    7424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:24:44.777921    7424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:24:44.809480    7424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:24:45.000860    7424 ssh_runner.go:195] Run: sudo systemctl restart containerd
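	
	The sed runs above rewrite /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.9, containerd is switched to the cgroupfs cgroup driver and the runc v2 runtime, and CNI is pointed at /etc/cni/net.d. A quick way to confirm the rewritten values (expected output reconstructed from the commands above, surrounding TOML omitted):
	
	# Sketch: verify the containerd settings rewritten by the sed commands above.
	grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# Expected, per the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"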
	I0610 10:24:45.046062    7424 start.go:494] detecting cgroup driver to use...
	I0610 10:24:45.058056    7424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 10:24:45.095447    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:24:45.134096    7424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:24:45.181343    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:24:45.217574    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 10:24:45.255120    7424 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 10:24:45.314217    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 10:24:45.340933    7424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:24:45.385599    7424 ssh_runner.go:195] Run: which cri-dockerd
	I0610 10:24:45.402779    7424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 10:24:45.418971    7424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 10:24:45.465190    7424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 10:24:45.675917    7424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 10:24:45.881698    7424 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 10:24:45.881992    7424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 10:24:45.926445    7424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:24:46.139017    7424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 10:24:48.663253    7424 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5240872s)
	I0610 10:24:48.678520    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 10:24:48.716307    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 10:24:48.752730    7424 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 10:24:48.968017    7424 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 10:24:49.161413    7424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:24:49.384397    7424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 10:24:49.428570    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 10:24:49.475038    7424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:24:49.693085    7424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 10:24:49.814940    7424 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 10:24:49.831255    7424 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 10:24:49.842486    7424 start.go:562] Will wait 60s for crictl version
	I0610 10:24:49.856013    7424 ssh_runner.go:195] Run: which crictl
	I0610 10:24:49.872280    7424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:24:49.931725    7424 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 10:24:49.942311    7424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 10:24:49.981939    7424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 10:24:50.020836    7424 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 10:24:50.020836    7424 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 10:24:50.026089    7424 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 10:24:50.026122    7424 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 10:24:50.026122    7424 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 10:24:50.026122    7424 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 10:24:50.028317    7424 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 10:24:50.028317    7424 ip.go:210] interface addr: 172.17.144.1/20
	I0610 10:24:50.043501    7424 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 10:24:50.049767    7424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
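	
	The one-liner above updates /etc/hosts idempotently: `grep -v` strips any stale host.minikube.internal line, the fresh mapping is appended, and the result is copied back with sudo so only the final write needs root. Generalized into a hypothetical helper with the same mechanics:
	
	# Sketch: idempotent /etc/hosts entry update (helper name is illustrative).
	add_hosts_entry() {
	  local ip=$1 name=$2
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
	}
	add_hosts_entry 172.17.144.1 host.minikube.internal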
	I0610 10:24:50.073190    7424 kubeadm.go:877] updating cluster {Name:addons-987700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:addons-987700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.154.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:24:50.073190    7424 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 10:24:50.083464    7424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 10:24:50.106851    7424 docker.go:685] Got preloaded images: 
	I0610 10:24:50.106851    7424 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0610 10:24:50.118358    7424 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 10:24:50.156370    7424 ssh_runner.go:195] Run: which lz4
	I0610 10:24:50.173365    7424 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 10:24:50.179816    7424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 10:24:50.180015    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0610 10:24:52.719390    7424 docker.go:649] duration metric: took 2.5570131s to copy over tarball
	I0610 10:24:52.737301    7424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 10:24:58.130719    7424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.3933744s)
	I0610 10:24:58.130719    7424 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 10:24:58.199524    7424 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 10:24:58.219094    7424 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0610 10:24:58.268587    7424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:24:58.499184    7424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 10:25:04.308259    7424 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.8090281s)
	I0610 10:25:04.317676    7424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 10:25:04.345148    7424 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 10:25:04.345269    7424 cache_images.go:84] Images are preloaded, skipping loading
	I0610 10:25:04.345293    7424 kubeadm.go:928] updating node { 172.17.154.55 8443 v1.30.1 docker true true} ...
	I0610 10:25:04.345519    7424 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-987700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.154.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-987700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:25:04.353931    7424 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 10:25:04.395927    7424 cni.go:84] Creating CNI manager for ""
	I0610 10:25:04.396023    7424 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:25:04.396023    7424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:25:04.396023    7424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.154.55 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-987700 NodeName:addons-987700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.154.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.154.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:25:04.396023    7424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.154.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-987700"
	  kubeletExtraArgs:
	    node-ip: 172.17.154.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.154.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
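	
	The rendered file above combines four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one multi-document YAML. As a sketch, kubeadm releases since v1.26, which includes the v1.30.1 binaries used here, can lint such a file before init is attempted:
	
	# Sketch: validate the generated multi-document kubeadm config.
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml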
	
	I0610 10:25:04.406492    7424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:25:04.425249    7424 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:25:04.437230    7424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 10:25:04.453959    7424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0610 10:25:04.485871    7424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:25:04.516883    7424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0610 10:25:04.559784    7424 ssh_runner.go:195] Run: grep 172.17.154.55	control-plane.minikube.internal$ /etc/hosts
	I0610 10:25:04.564796    7424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.154.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:25:04.598785    7424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:25:04.802825    7424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:25:04.835816    7424 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700 for IP: 172.17.154.55
	I0610 10:25:04.835908    7424 certs.go:194] generating shared ca certs ...
	I0610 10:25:04.835908    7424 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:04.836283    7424 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 10:25:04.908858    7424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0610 10:25:04.908858    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:04.909854    7424 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0610 10:25:04.909854    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:04.910836    7424 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 10:25:05.202414    7424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0610 10:25:05.202414    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:05.203079    7424 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0610 10:25:05.203079    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:05.205214    7424 certs.go:256] generating profile certs ...
	I0610 10:25:05.205214    7424 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.key
	I0610 10:25:05.205214    7424 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt with IP's: []
	I0610 10:25:05.501488    7424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt ...
	I0610 10:25:05.501488    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: {Name:mk40c9b3203efd2d97fbfe8d3ae71ec07c2a504e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:05.503455    7424 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.key ...
	I0610 10:25:05.503455    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.key: {Name:mk66b31b245061496c460088916869b341351cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:05.503899    7424 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.key.0d2858a6
	I0610 10:25:05.504924    7424 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.crt.0d2858a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.154.55]
	I0610 10:25:05.750925    7424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.crt.0d2858a6 ...
	I0610 10:25:05.750925    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.crt.0d2858a6: {Name:mkf944b2a77d169ee7b1a4fad825444da5bc0759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:05.751381    7424 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.key.0d2858a6 ...
	I0610 10:25:05.751381    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.key.0d2858a6: {Name:mk8f4a2e8f71fab384f1418ea111af5fa69bb84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:05.753400    7424 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.crt.0d2858a6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.crt
	I0610 10:25:05.765396    7424 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.key.0d2858a6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.key
	I0610 10:25:05.767395    7424 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\proxy-client.key
	I0610 10:25:05.767395    7424 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\proxy-client.crt with IP's: []
	I0610 10:25:06.411827    7424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\proxy-client.crt ...
	I0610 10:25:06.411827    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\proxy-client.crt: {Name:mke38b49a057dd8443153dcf81bf0dd74d311a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:06.413278    7424 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\proxy-client.key ...
	I0610 10:25:06.413278    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\proxy-client.key: {Name:mk46d5f1b0db88613db9ee2d27e60915350949fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:06.424871    7424 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 10:25:06.425984    7424 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 10:25:06.426206    7424 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 10:25:06.426206    7424 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 10:25:06.427715    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:25:06.472171    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:25:06.512704    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:25:06.568782    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 10:25:06.616802    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0610 10:25:06.667550    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 10:25:06.716361    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:25:06.764304    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 10:25:06.808943    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:25:06.862455    7424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:25:06.911504    7424 ssh_runner.go:195] Run: openssl version
	I0610 10:25:06.928582    7424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:25:06.961537    7424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:25:06.969194    7424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:25:06.985165    7424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:25:07.005408    7424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
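	
	The `openssl x509 -hash -noout` call above prints the CA's subject-name hash, and the `ln -fs` that follows creates /etc/ssl/certs/b5213941.0. The `<hash>.0` naming is OpenSSL's hashed-directory lookup convention, which lets certificate verification find the minikube CA without regenerating the bundled ca-certificates store. The same steps by hand:
	
	# Sketch: install a CA into OpenSSL's hashed lookup directory.
	ca=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$ca")    # prints e.g. b5213941
	sudo ln -fs "$ca" "/etc/ssl/certs/${hash}.0"   # OpenSSL looks up <hash>.0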
	I0610 10:25:07.039527    7424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:25:07.048337    7424 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 10:25:07.048337    7424 kubeadm.go:391] StartCluster: {Name:addons-987700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:addons-987700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.154.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:25:07.058228    7424 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 10:25:07.099299    7424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 10:25:07.133955    7424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 10:25:07.171728    7424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 10:25:07.193614    7424 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 10:25:07.193783    7424 kubeadm.go:156] found existing configuration files:
	
	I0610 10:25:07.210952    7424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 10:25:07.225793    7424 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 10:25:07.239090    7424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 10:25:07.274423    7424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 10:25:07.290829    7424 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 10:25:07.302669    7424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 10:25:07.333028    7424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 10:25:07.352429    7424 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 10:25:07.365881    7424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 10:25:07.399292    7424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 10:25:07.420057    7424 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 10:25:07.435139    7424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 10:25:07.453618    7424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 10:25:07.532426    7424 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 10:25:07.533036    7424 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 10:25:07.718026    7424 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 10:25:07.718462    7424 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 10:25:07.718739    7424 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 10:25:08.081208    7424 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 10:25:08.084402    7424 out.go:204]   - Generating certificates and keys ...
	I0610 10:25:08.084615    7424 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 10:25:08.084701    7424 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 10:25:08.369155    7424 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 10:25:09.099708    7424 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 10:25:09.324787    7424 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 10:25:09.411895    7424 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 10:25:09.536624    7424 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 10:25:09.536624    7424 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-987700 localhost] and IPs [172.17.154.55 127.0.0.1 ::1]
	I0610 10:25:09.709409    7424 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 10:25:09.710772    7424 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-987700 localhost] and IPs [172.17.154.55 127.0.0.1 ::1]
	I0610 10:25:09.885964    7424 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 10:25:10.327803    7424 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 10:25:10.488484    7424 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 10:25:10.488693    7424 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 10:25:10.627278    7424 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 10:25:10.944154    7424 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 10:25:11.122412    7424 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 10:25:11.463241    7424 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 10:25:11.708370    7424 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 10:25:11.709518    7424 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 10:25:11.714167    7424 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 10:25:11.720181    7424 out.go:204]   - Booting up control plane ...
	I0610 10:25:11.720601    7424 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 10:25:11.720659    7424 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 10:25:11.721036    7424 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 10:25:11.748869    7424 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 10:25:11.749208    7424 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 10:25:11.749208    7424 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 10:25:11.970579    7424 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 10:25:11.970579    7424 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 10:25:12.972020    7424 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001672832s
	I0610 10:25:12.972247    7424 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 10:25:20.475961    7424 kubeadm.go:309] [api-check] The API server is healthy after 7.503774711s
	I0610 10:25:20.497933    7424 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 10:25:20.525785    7424 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 10:25:20.582793    7424 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 10:25:20.582793    7424 kubeadm.go:309] [mark-control-plane] Marking the node addons-987700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 10:25:20.604474    7424 kubeadm.go:309] [bootstrap-token] Using token: v2rrx1.jdv5ix5bnxru9gjs
	I0610 10:25:20.606885    7424 out.go:204]   - Configuring RBAC rules ...
	I0610 10:25:20.606885    7424 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 10:25:20.617176    7424 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 10:25:20.633917    7424 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 10:25:20.641915    7424 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 10:25:20.649773    7424 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 10:25:20.656661    7424 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 10:25:20.902320    7424 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 10:25:21.379907    7424 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 10:25:21.892670    7424 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 10:25:21.894642    7424 kubeadm.go:309] 
	I0610 10:25:21.895685    7424 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 10:25:21.895685    7424 kubeadm.go:309] 
	I0610 10:25:21.895685    7424 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 10:25:21.895685    7424 kubeadm.go:309] 
	I0610 10:25:21.895685    7424 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 10:25:21.896269    7424 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 10:25:21.896269    7424 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 10:25:21.896269    7424 kubeadm.go:309] 
	I0610 10:25:21.896506    7424 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 10:25:21.896577    7424 kubeadm.go:309] 
	I0610 10:25:21.896733    7424 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 10:25:21.896733    7424 kubeadm.go:309] 
	I0610 10:25:21.896910    7424 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 10:25:21.896910    7424 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 10:25:21.896910    7424 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 10:25:21.896910    7424 kubeadm.go:309] 
	I0610 10:25:21.896910    7424 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 10:25:21.897481    7424 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 10:25:21.897481    7424 kubeadm.go:309] 
	I0610 10:25:21.897814    7424 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token v2rrx1.jdv5ix5bnxru9gjs \
	I0610 10:25:21.897814    7424 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 \
	I0610 10:25:21.897814    7424 kubeadm.go:309] 	--control-plane 
	I0610 10:25:21.898386    7424 kubeadm.go:309] 
	I0610 10:25:21.898832    7424 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 10:25:21.898832    7424 kubeadm.go:309] 
	I0610 10:25:21.898832    7424 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token v2rrx1.jdv5ix5bnxru9gjs \
	I0610 10:25:21.898832    7424 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
	I0610 10:25:21.899721    7424 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 10:25:21.899806    7424 cni.go:84] Creating CNI manager for ""
	I0610 10:25:21.899806    7424 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:25:21.905514    7424 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 10:25:21.920067    7424 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 10:25:21.940897    7424 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 10:25:21.986525    7424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 10:25:21.999534    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-987700 minikube.k8s.io/updated_at=2024_06_10T10_25_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=addons-987700 minikube.k8s.io/primary=true
	I0610 10:25:22.000543    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:22.008538    7424 ops.go:34] apiserver oom_adj: -16
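	
	The oom_adj read above confirms the API server runs at -16 via the legacy /proc interface, making it one of the last processes the kernel's OOM killer will select. The same check against both the legacy and the modern field:
	
	# Sketch: inspect the apiserver's OOM-kill priority.
	cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy interface, -16 here
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern equivalent field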
	I0610 10:25:22.198664    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:22.706054    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:23.206835    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:23.709951    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:24.213042    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:24.703501    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:25.203765    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:25.708187    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:26.213187    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:26.713150    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:27.209369    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:27.713110    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:28.200575    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:28.706346    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:29.208346    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:29.714498    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:30.212058    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:30.710866    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:31.213728    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:31.712147    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:32.203452    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:32.713185    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:33.200848    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:33.705918    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:34.212614    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:34.700497    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:35.202221    7424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:25:35.374005    7424 kubeadm.go:1107] duration metric: took 13.3873711s to wait for elevateKubeSystemPrivileges
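The run of identical `kubectl get sa default` calls above is a poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which is what the 13.3873711s elevateKubeSystemPrivileges metric measures. A shell sketch of the same wait, using the binary and kubeconfig paths from the log:

    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows roughly half a second between attempts
    done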
	W0610 10:25:35.374141    7424 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 10:25:35.374254    7424 kubeadm.go:393] duration metric: took 28.325686s to StartCluster
	I0610 10:25:35.374281    7424 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:35.374548    7424 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 10:25:35.375456    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:25:35.377167    7424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 10:25:35.377167    7424 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.154.55 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:25:35.379450    7424 out.go:177] * Verifying Kubernetes components...
	I0610 10:25:35.377167    7424 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0610 10:25:35.377815    7424 config.go:182] Loaded profile config "addons-987700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 10:25:35.386402    7424 addons.go:69] Setting yakd=true in profile "addons-987700"
	I0610 10:25:35.386402    7424 addons.go:69] Setting storage-provisioner=true in profile "addons-987700"
	I0610 10:25:35.386402    7424 addons.go:234] Setting addon storage-provisioner=true in "addons-987700"
	I0610 10:25:35.386402    7424 addons.go:69] Setting inspektor-gadget=true in profile "addons-987700"
	I0610 10:25:35.386402    7424 addons.go:234] Setting addon inspektor-gadget=true in "addons-987700"
	I0610 10:25:35.386402    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.386402    7424 addons.go:69] Setting ingress=true in profile "addons-987700"
	I0610 10:25:35.386402    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.386402    7424 addons.go:69] Setting cloud-spanner=true in profile "addons-987700"
	I0610 10:25:35.386961    7424 addons.go:234] Setting addon cloud-spanner=true in "addons-987700"
	I0610 10:25:35.386402    7424 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-987700"
	I0610 10:25:35.387157    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.387237    7424 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-987700"
	I0610 10:25:35.387433    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.386402    7424 addons.go:69] Setting metrics-server=true in profile "addons-987700"
	I0610 10:25:35.387595    7424 addons.go:234] Setting addon metrics-server=true in "addons-987700"
	I0610 10:25:35.386402    7424 addons.go:69] Setting ingress-dns=true in profile "addons-987700"
	I0610 10:25:35.387834    7424 addons.go:234] Setting addon ingress-dns=true in "addons-987700"
	I0610 10:25:35.387915    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.388038    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.386402    7424 addons.go:234] Setting addon yakd=true in "addons-987700"
	I0610 10:25:35.388394    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.386402    7424 addons.go:69] Setting default-storageclass=true in profile "addons-987700"
	I0610 10:25:35.388548    7424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-987700"
	I0610 10:25:35.386402    7424 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-987700"
	I0610 10:25:35.388768    7424 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-987700"
	I0610 10:25:35.386402    7424 addons.go:69] Setting gcp-auth=true in profile "addons-987700"
	I0610 10:25:35.389033    7424 mustload.go:65] Loading cluster: addons-987700
	I0610 10:25:35.386402    7424 addons.go:69] Setting volcano=true in profile "addons-987700"
	I0610 10:25:35.389033    7424 addons.go:234] Setting addon volcano=true in "addons-987700"
	I0610 10:25:35.389033    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.386402    7424 addons.go:69] Setting helm-tiller=true in profile "addons-987700"
	I0610 10:25:35.389033    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.389033    7424 addons.go:234] Setting addon helm-tiller=true in "addons-987700"
	I0610 10:25:35.390002    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.389033    7424 config.go:182] Loaded profile config "addons-987700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 10:25:35.386402    7424 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-987700"
	I0610 10:25:35.390002    7424 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-987700"
	I0610 10:25:35.390002    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.386402    7424 addons.go:69] Setting registry=true in profile "addons-987700"
	I0610 10:25:35.390002    7424 addons.go:234] Setting addon registry=true in "addons-987700"
	I0610 10:25:35.386961    7424 addons.go:234] Setting addon ingress=true in "addons-987700"
	I0610 10:25:35.390002    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.390002    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.391007    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.391007    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.392083    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.393008    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.386402    7424 addons.go:69] Setting volumesnapshots=true in profile "addons-987700"
	I0610 10:25:35.394007    7424 addons.go:234] Setting addon volumesnapshots=true in "addons-987700"
	I0610 10:25:35.394007    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:35.395020    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.395020    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.396024    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.398018    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.416014    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.416431    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.416665    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.416665    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.417021    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.417021    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:35.420015    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
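Each `Setting addon ...=true` line above is one entry of the toEnable map logged at 10:25:35.377 being applied to the profile, with a Hyper-V Get-VM state probe per addon that needs guest access. From the host, the same toggles are available through the CLI used elsewhere in this report, e.g. (hedged illustration):

    out/minikube-windows-amd64.exe -p addons-987700 addons enable metrics-server
    out/minikube-windows-amd64.exe -p addons-987700 addons list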
	I0610 10:25:35.435289    7424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:25:36.202031    7424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 10:25:36.506060    7424 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.0707625s)
	I0610 10:25:36.544975    7424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:25:38.164947    7424 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.9628994s)
	I0610 10:25:38.164947    7424 start.go:946] {"host.minikube.internal": 172.17.144.1} host record injected into CoreDNS's ConfigMap
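The sed pipeline that completed above rewrites the coredns ConfigMap in place, inserting a hosts block that resolves host.minikube.internal to the host-side address 172.17.144.1 and adding query logging. A hedged spot-check that the record landed, reusing the pinned kubectl from the log:

    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'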
	I0610 10:25:38.168909    7424 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.6238878s)
	I0610 10:25:38.184377    7424 node_ready.go:35] waiting up to 6m0s for node "addons-987700" to be "Ready" ...
	I0610 10:25:38.474966    7424 node_ready.go:49] node "addons-987700" has status "Ready":"True"
	I0610 10:25:38.474966    7424 node_ready.go:38] duration metric: took 290.5861ms for node "addons-987700" to be "Ready" ...
	I0610 10:25:38.474966    7424 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:25:38.523905    7424 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8xjrw" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:39.318353    7424 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-987700" context rescaled to 1 replicas
	I0610 10:25:40.715776    7424 pod_ready.go:102] pod "coredns-7db6d8ff4d-8xjrw" in "kube-system" namespace has status "Ready":"False"
	I0610 10:25:42.423793    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:42.423793    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:42.432297    7424 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 10:25:42.429916    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:42.442795    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:42.442795    7424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:25:42.443805    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 10:25:42.443805    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:42.456856    7424 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0610 10:25:42.467674    7424 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 10:25:42.467674    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0610 10:25:42.467674    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:42.983679    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:42.983679    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:42.986372    7424 addons.go:234] Setting addon default-storageclass=true in "addons-987700"
	I0610 10:25:42.986372    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:42.987362    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:42.991368    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:42.992404    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:42.995678    7424 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0610 10:25:43.001486    7424 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0610 10:25:43.001486    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0610 10:25:43.001592    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.003387    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.003387    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.007427    7424 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0610 10:25:43.013370    7424 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0610 10:25:43.013370    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0610 10:25:43.013370    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.028256    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.028256    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.037261    7424 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0610 10:25:43.048001    7424 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0610 10:25:43.066507    7424 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0610 10:25:43.089376    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.089376    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.089376    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.089376    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.092376    7424 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0610 10:25:43.089376    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.095962    7424 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0610 10:25:43.096370    7424 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0610 10:25:43.096370    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.096370    7424 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0610 10:25:43.096370    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.102846    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:43.104578    7424 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0610 10:25:43.109523    7424 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0610 10:25:43.135578    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0610 10:25:43.135578    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.135578    7424 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0610 10:25:43.151591    7424 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0610 10:25:43.151591    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0610 10:25:43.151591    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.196595    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.196595    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.213075    7424 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0610 10:25:43.202576    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.202576    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.202576    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.213075    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.219071    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.231285    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:43.238057    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.238057    7424 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0610 10:25:43.238057    7424 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0610 10:25:43.238057    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.237060    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.237060    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.237060    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.237060    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.237060    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:43.263060    7424 pod_ready.go:102] pod "coredns-7db6d8ff4d-8xjrw" in "kube-system" namespace has status "Ready":"False"
	I0610 10:25:43.273060    7424 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0610 10:25:43.276098    7424 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-987700"
	I0610 10:25:43.307052    7424 out.go:177]   - Using image docker.io/registry:2.8.3
	I0610 10:25:43.307052    7424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0610 10:25:43.312262    7424 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0610 10:25:43.312262    7424 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0610 10:25:43.312262    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:25:43.315046    7424 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0610 10:25:43.319782    7424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0610 10:25:43.320783    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.330847    7424 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0610 10:25:43.333013    7424 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0610 10:25:43.333013    7424 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 10:25:43.340317    7424 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0610 10:25:43.340317    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0610 10:25:43.375317    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.384314    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.392412    7424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0610 10:25:43.402318    7424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 10:25:43.402318    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0610 10:25:43.422956    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.426968    7424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0610 10:25:43.437969    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:43.572694    7424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0610 10:25:43.654694    7424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0610 10:25:43.683993    7424 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0610 10:25:43.749306    7424 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0610 10:25:43.799699    7424 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0610 10:25:43.842081    7424 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0610 10:25:43.842081    7424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0610 10:25:43.842081    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:45.931574    7424 pod_ready.go:102] pod "coredns-7db6d8ff4d-8xjrw" in "kube-system" namespace has status "Ready":"False"
	I0610 10:25:48.256564    7424 pod_ready.go:102] pod "coredns-7db6d8ff4d-8xjrw" in "kube-system" namespace has status "Ready":"False"
	I0610 10:25:49.185947    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:49.185947    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:49.187131    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:49.320829    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:49.321519    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:49.321519    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:49.341847    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:49.342773    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:49.342773    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:49.595394    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:49.595394    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:49.595394    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:49.632413    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:49.632413    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:49.632413    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:49.680012    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:49.680012    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:49.680012    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:49.731540    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:49.731540    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:49.731540    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:49.782753    7424 pod_ready.go:92] pod "coredns-7db6d8ff4d-8xjrw" in "kube-system" namespace has status "Ready":"True"
	I0610 10:25:49.782753    7424 pod_ready.go:81] duration metric: took 11.2587558s for pod "coredns-7db6d8ff4d-8xjrw" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:49.782753    7424 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dxppl" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:49.870827    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:49.870827    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:49.874828    7424 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0610 10:25:49.876828    7424 out.go:177]   - Using image docker.io/busybox:stable
	I0610 10:25:49.880775    7424 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0610 10:25:49.880775    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0610 10:25:49.880775    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:49.921798    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:49.921798    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:49.921798    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:49.936798    7424 pod_ready.go:92] pod "coredns-7db6d8ff4d-dxppl" in "kube-system" namespace has status "Ready":"True"
	I0610 10:25:49.936798    7424 pod_ready.go:81] duration metric: took 154.0435ms for pod "coredns-7db6d8ff4d-dxppl" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:49.936798    7424 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-987700" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:50.053996    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:50.053996    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:50.054978    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:50.200184    7424 pod_ready.go:92] pod "etcd-addons-987700" in "kube-system" namespace has status "Ready":"True"
	I0610 10:25:50.200184    7424 pod_ready.go:81] duration metric: took 263.3841ms for pod "etcd-addons-987700" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:50.200184    7424 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-987700" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:50.273677    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:50.273677    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:50.273677    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:50.307070    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:50.307070    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:50.307070    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:50.381206    7424 pod_ready.go:92] pod "kube-apiserver-addons-987700" in "kube-system" namespace has status "Ready":"True"
	I0610 10:25:50.381206    7424 pod_ready.go:81] duration metric: took 181.02ms for pod "kube-apiserver-addons-987700" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:50.381206    7424 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-987700" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:50.450430    7424 pod_ready.go:92] pod "kube-controller-manager-addons-987700" in "kube-system" namespace has status "Ready":"True"
	I0610 10:25:50.450430    7424 pod_ready.go:81] duration metric: took 69.2235ms for pod "kube-controller-manager-addons-987700" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:50.450430    7424 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k8k5q" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:50.512427    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:50.512427    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:50.512427    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:50.521640    7424 pod_ready.go:92] pod "kube-proxy-k8k5q" in "kube-system" namespace has status "Ready":"True"
	I0610 10:25:50.521640    7424 pod_ready.go:81] duration metric: took 71.2101ms for pod "kube-proxy-k8k5q" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:50.521640    7424 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-987700" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:50.561050    7424 pod_ready.go:92] pod "kube-scheduler-addons-987700" in "kube-system" namespace has status "Ready":"True"
	I0610 10:25:50.561050    7424 pod_ready.go:81] duration metric: took 39.4098ms for pod "kube-scheduler-addons-987700" in "kube-system" namespace to be "Ready" ...
	I0610 10:25:50.561050    7424 pod_ready.go:38] duration metric: took 12.0859857s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
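The per-pod waits summarized above can be reproduced with kubectl's built-in condition waiting; a hedged equivalent for the kube-dns pods, which were the slowest to become Ready in this run:

    kubectl --context addons-987700 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m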
	I0610 10:25:50.562694    7424 api_server.go:52] waiting for apiserver process to appear ...
	I0610 10:25:50.587377    7424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:25:50.684064    7424 api_server.go:72] duration metric: took 15.3067709s to wait for apiserver process to appear ...
	I0610 10:25:50.684064    7424 api_server.go:88] waiting for apiserver healthz status ...
	I0610 10:25:50.684064    7424 api_server.go:253] Checking apiserver healthz at https://172.17.154.55:8443/healthz ...
	I0610 10:25:50.722435    7424 api_server.go:279] https://172.17.154.55:8443/healthz returned 200:
	ok
	I0610 10:25:50.731441    7424 api_server.go:141] control plane version: v1.30.1
	I0610 10:25:50.731441    7424 api_server.go:131] duration metric: took 47.3768ms to wait for apiserver health ...
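The healthz probe above hits the apiserver directly over HTTPS, and the 200 status plus the literal body "ok" is the whole contract. By hand (hedged: -k skips certificate verification, since the cluster CA is not in the host trust store):

    curl -k https://172.17.154.55:8443/healthz
    # ok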
	I0610 10:25:50.731441    7424 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 10:25:50.761865    7424 system_pods.go:59] 7 kube-system pods found
	I0610 10:25:50.761865    7424 system_pods.go:61] "coredns-7db6d8ff4d-8xjrw" [58b9025d-7a6a-41ca-9abb-48ded3ad0646] Running
	I0610 10:25:50.761865    7424 system_pods.go:61] "coredns-7db6d8ff4d-dxppl" [c91ed441-d7dd-48d4-a8b7-df07f07eb753] Running
	I0610 10:25:50.761865    7424 system_pods.go:61] "etcd-addons-987700" [e995b570-8b93-4333-a08d-e4076abb8eb8] Running
	I0610 10:25:50.761865    7424 system_pods.go:61] "kube-apiserver-addons-987700" [f9d93a29-003c-4850-803d-c58b412f5802] Running
	I0610 10:25:50.761865    7424 system_pods.go:61] "kube-controller-manager-addons-987700" [8fb26bcf-6731-44b6-848d-136a4a69a967] Running
	I0610 10:25:50.761865    7424 system_pods.go:61] "kube-proxy-k8k5q" [19e8e7e1-8c01-49b1-bbe9-326be6650da5] Running
	I0610 10:25:50.761865    7424 system_pods.go:61] "kube-scheduler-addons-987700" [f23e9f4e-8eaa-4ea4-a78b-db4b4f13f207] Running
	I0610 10:25:50.761865    7424 system_pods.go:74] duration metric: took 30.4234ms to wait for pod list to return data ...
	I0610 10:25:50.761865    7424 default_sa.go:34] waiting for default service account to be created ...
	I0610 10:25:50.809563    7424 default_sa.go:45] found service account: "default"
	I0610 10:25:50.809563    7424 default_sa.go:55] duration metric: took 47.6982ms for default service account to be created ...
	I0610 10:25:50.809915    7424 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 10:25:50.869555    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:50.869686    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:50.869837    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:51.005049    7424 system_pods.go:86] 7 kube-system pods found
	I0610 10:25:51.005049    7424 system_pods.go:89] "coredns-7db6d8ff4d-8xjrw" [58b9025d-7a6a-41ca-9abb-48ded3ad0646] Running
	I0610 10:25:51.005049    7424 system_pods.go:89] "coredns-7db6d8ff4d-dxppl" [c91ed441-d7dd-48d4-a8b7-df07f07eb753] Running
	I0610 10:25:51.005049    7424 system_pods.go:89] "etcd-addons-987700" [e995b570-8b93-4333-a08d-e4076abb8eb8] Running
	I0610 10:25:51.005049    7424 system_pods.go:89] "kube-apiserver-addons-987700" [f9d93a29-003c-4850-803d-c58b412f5802] Running
	I0610 10:25:51.005049    7424 system_pods.go:89] "kube-controller-manager-addons-987700" [8fb26bcf-6731-44b6-848d-136a4a69a967] Running
	I0610 10:25:51.005049    7424 system_pods.go:89] "kube-proxy-k8k5q" [19e8e7e1-8c01-49b1-bbe9-326be6650da5] Running
	I0610 10:25:51.005049    7424 system_pods.go:89] "kube-scheduler-addons-987700" [f23e9f4e-8eaa-4ea4-a78b-db4b4f13f207] Running
	I0610 10:25:51.005049    7424 system_pods.go:126] duration metric: took 195.1328ms to wait for k8s-apps to be running ...
	I0610 10:25:51.005049    7424 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 10:25:51.029541    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:25:51.110998    7424 system_svc.go:56] duration metric: took 105.9484ms WaitForService to wait for kubelet
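WaitForService above shells into the guest and asks systemd whether kubelet is active; with --quiet the exit code is the only signal. The same check interactively, as a hedged sketch:

    sudo systemctl is-active kubelet && echo kubelet is running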
	I0610 10:25:51.110998    7424 kubeadm.go:576] duration metric: took 15.733702s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:25:51.110998    7424 node_conditions.go:102] verifying NodePressure condition ...
	I0610 10:25:51.284603    7424 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:25:51.284603    7424 node_conditions.go:123] node cpu capacity is 2
	I0610 10:25:51.284603    7424 node_conditions.go:105] duration metric: took 173.6028ms to run NodePressure ...
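The NodePressure check above reads the node's reported capacity (17734596Ki ephemeral storage, 2 CPUs) and its pressure conditions from the API. A hedged way to view the same data directly:

    kubectl --context addons-987700 get node addons-987700 \
      -o jsonpath='{.status.capacity}{"\n"}{.status.conditions[*].type}{"\n"}'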
	I0610 10:25:51.284603    7424 start.go:240] waiting for startup goroutines ...
	I0610 10:25:51.362378    7424 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0610 10:25:51.362378    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:54.477771    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:54.477771    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:54.477771    7424 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 10:25:54.477771    7424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 10:25:54.477771    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:25:56.733351    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:56.734543    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:56.734543    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:57.031780    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:57.031780    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:57.032613    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
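Each sshutil line above builds an SSH client from the machine's generated key pair. The same session can be opened by hand from the Windows host, as a sketch using the exact key path, user, and address from the log:

    ssh -i C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa docker@172.17.154.55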
	I0610 10:25:57.105809    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:57.105809    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:57.109859    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:57.152223    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:57.152781    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:57.152781    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:57.274901    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:25:57.278905    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:57.278905    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:57.278905    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:57.556551    7424 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0610 10:25:57.556551    7424 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0610 10:25:57.570851    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 10:25:57.585863    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:57.585863    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:57.586871    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:57.713406    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:57.713406    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:57.713406    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:57.774207    7424 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0610 10:25:57.774207    7424 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0610 10:25:57.783875    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:57.783875    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:57.785206    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:57.905022    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:57.905087    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:57.905401    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:57.999029    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:57.999029    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:58.000027    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:58.060772    7424 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0610 10:25:58.060772    7424 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0610 10:25:58.074758    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:58.074758    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:58.075764    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:58.081766    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0610 10:25:58.166748    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:58.167085    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:58.167275    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:58.238439    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:58.238439    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:58.238439    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:58.285010    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0610 10:25:58.301245    7424 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0610 10:25:58.301245    7424 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0610 10:25:58.304250    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:58.304250    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:58.305246    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:58.321246    7424 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0610 10:25:58.322256    7424 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0610 10:25:58.514853    7424 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0610 10:25:58.514941    7424 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0610 10:25:58.550004    7424 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0610 10:25:58.550004    7424 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0610 10:25:58.566315    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0610 10:25:58.720605    7424 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0610 10:25:58.721594    7424 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0610 10:25:58.824322    7424 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0610 10:25:58.824322    7424 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0610 10:25:58.825312    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0610 10:25:58.911467    7424 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0610 10:25:58.911562    7424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0610 10:25:58.953483    7424 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0610 10:25:58.953563    7424 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0610 10:25:58.975115    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0610 10:25:59.097083    7424 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0610 10:25:59.097083    7424 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0610 10:25:59.126847    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:25:59.126847    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:59.126847    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:25:59.131838    7424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 10:25:59.131838    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0610 10:25:59.153508    7424 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0610 10:25:59.153563    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0610 10:25:59.213139    7424 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0610 10:25:59.213139    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0610 10:25:59.245852    7424 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0610 10:25:59.245915    7424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0610 10:25:59.359881    7424 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0610 10:25:59.359881    7424 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0610 10:25:59.490028    7424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 10:25:59.490028    7424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 10:25:59.649953    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0610 10:25:59.705070    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0610 10:25:59.716509    7424 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0610 10:25:59.716631    7424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0610 10:25:59.726544    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:25:59.726544    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:25:59.727471    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:25:59.746481    7424 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0610 10:25:59.746671    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0610 10:25:59.915736    7424 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 10:25:59.915736    7424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 10:25:59.992923    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0610 10:26:00.133390    7424 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0610 10:26:00.133479    7424 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0610 10:26:00.168537    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
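After a batched apply like the metrics-server one above, the deployments come up asynchronously. A hedged follow-up that blocks until the rollout finishes, using the same pinned kubectl and kubeconfig:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl \
      -n kube-system rollout status deployment/metrics-server --timeout=5m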
	I0610 10:26:00.460692    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:26:00.460692    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:26:00.461691    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:26:00.608163    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:26:00.608355    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:26:00.608712    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:26:00.740921    7424 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 10:26:00.740921    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0610 10:26:00.759948    7424 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0610 10:26:00.759948    7424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0610 10:26:00.985195    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.7101716s)
	I0610 10:26:01.301983    7424 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0610 10:26:01.301983    7424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0610 10:26:01.330556    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 10:26:01.506074    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0610 10:26:01.753209    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.1822429s)
	I0610 10:26:01.910010    7424 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0610 10:26:02.011398    7424 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0610 10:26:02.011469    7424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0610 10:26:02.158185    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:26:02.158504    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:26:02.159334    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
	I0610 10:26:02.635678    7424 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0610 10:26:02.635831    7424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0610 10:26:02.714380    7424 addons.go:234] Setting addon gcp-auth=true in "addons-987700"
	I0610 10:26:02.714602    7424 host.go:66] Checking if "addons-987700" exists ...
	I0610 10:26:02.715080    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:26:03.107106    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.822057s)
	I0610 10:26:03.107106    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.0252993s)
	I0610 10:26:03.690758    7424 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0610 10:26:03.690758    7424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0610 10:26:03.957379    7424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0610 10:26:03.957379    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0610 10:26:04.620150    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 10:26:04.626882    7424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0610 10:26:04.626882    7424 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0610 10:26:05.399662    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:26:05.399769    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:26:05.415037    7424 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0610 10:26:05.415037    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-987700 ).state
	I0610 10:26:05.747260    7424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0610 10:26:05.747329    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0610 10:26:06.346861    7424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0610 10:26:06.346861    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0610 10:26:07.164005    7424 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 10:26:07.164081    7424 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0610 10:26:08.130778    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 10:26:08.200963    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:26:08.200963    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:26:08.200963    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-987700 ).networkadapters[0]).ipaddresses[0]
	I0610 10:26:11.135979    7424 main.go:141] libmachine: [stdout =====>] : 172.17.154.55
	
	I0610 10:26:11.135979    7424 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:26:11.135979    7424 sshutil.go:53] new ssh client: &{IP:172.17.154.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-987700\id_rsa Username:docker}
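The `[executing ==>]` / `[stdout =====>]` pairs above are minikube's Hyper-V driver shelling out to PowerShell: each state or IP lookup is a fresh `Hyper-V\Get-VM` invocation, and the returned address seeds a new SSH client for `ssh_runner`. A minimal sketch of that pattern, assuming `os/exec` (the function name and error handling are illustrative, not minikube's actual code):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// queryVMIP mirrors the log's pattern: run PowerShell non-interactively and
// read the first IP of the VM's first network adapter. Hypothetical helper.
func queryVMIP(vmName string) (string, error) {
	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
	cmd := exec.Command(ps, "-NoProfile", "-NonInteractive", script)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		return "", fmt.Errorf("powershell: %v: %s", err, stderr.String())
	}
	// stdout carries a trailing newline, which is why the log echoes a blank
	// line after every [stdout =====>] entry.
	return strings.TrimSpace(stdout.String()), nil
}
```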
	I0610 10:26:13.074663    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (14.5082294s)
	I0610 10:26:13.074663    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (14.2492345s)
	I0610 10:26:13.074663    7424 addons.go:475] Verifying addon ingress=true in "addons-987700"
	I0610 10:26:13.078927    7424 out.go:177] * Verifying ingress addon...
	I0610 10:26:13.082853    7424 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0610 10:26:13.101771    7424 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0610 10:26:13.102350    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
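The `kapi.go:96` lines that dominate the rest of this log are a plain poll loop: list pods by label selector roughly every half second and report the phase until it leaves Pending. A hedged client-go sketch of that loop (names, interval, and timeout are assumptions, not minikube's implementation):

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsRunning polls until every pod matching selector is Running.
// Hypothetical helper; minikube's kapi wait logic differs in detail.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient API error or empty list: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}
```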
	I0610 10:26:13.605018    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:14.119461    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:14.753216    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:15.101623    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:15.882975    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:16.115370    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:16.592521    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:16.754446    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (17.7791851s)
	I0610 10:26:16.754570    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (17.1044768s)
	I0610 10:26:16.754570    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (17.0493596s)
	I0610 10:26:16.754638    7424 addons.go:475] Verifying addon registry=true in "addons-987700"
	I0610 10:26:16.759539    7424 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-987700 service yakd-dashboard -n yakd-dashboard
	
	I0610 10:26:16.761820    7424 out.go:177] * Verifying registry addon...
	I0610 10:26:16.754751    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (16.7615769s)
	I0610 10:26:16.754751    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (16.5850808s)
	I0610 10:26:16.755080    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (15.4243972s)
	I0610 10:26:16.755229    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (15.2489565s)
	I0610 10:26:16.755256    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.1350068s)
	W0610 10:26:16.764564    7424 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0610 10:26:16.765568    7424 addons.go:475] Verifying addon metrics-server=true in "addons-987700"
	I0610 10:26:16.767575    7424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0610 10:26:16.767575    7424 retry.go:31] will retry after 198.772532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
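The failure above is the classic CRD establishment race: a single `kubectl apply` creates the VolumeSnapshot CRDs and a `VolumeSnapshotClass` in one shot, but the REST mapping for the new kind only exists once the CRD reports the `Established` condition, hence "ensure CRDs are installed first" and the ~199ms retry scheduled by `retry.go:31`. One way to avoid the race is to wait for that condition between the two applies; a sketch using the apiextensions client (an assumed helper, not minikube's code):

```go
package main

import (
	"context"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCRDEstablished blocks until the named CRD is Established, i.e. the
// API server will accept objects of its kind. Hypothetical helper.
func waitForCRDEstablished(ctx context.Context, cs apiextensionsclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // NotFound right after apply is expected; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
```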
	I0610 10:26:16.791423    7424 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 10:26:16.791423    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0610 10:26:16.803641    7424 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
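The `default-storageclass` warning is a different failure mode: an optimistic-concurrency conflict. The addon read the `local-path` StorageClass, something else updated it first, and the write was rejected because the resourceVersion had moved on. The standard client-go remedy is re-read-and-retry; a sketch with assumed helper names (here the addon only surfaced the conflict as a warning):

```go
package main

import (
	"context"
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setDefaultClass flips the is-default-class annotation, re-reading the
// object on every attempt so a conflict simply triggers another round.
func setDefaultClass(ctx context.Context, cs kubernetes.Interface, name string, isDefault bool) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = strconv.FormatBool(isDefault)
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}
```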
	I0610 10:26:16.984171    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 10:26:17.094448    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:17.303044    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:17.658758    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:17.794115    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.6632581s)
	I0610 10:26:17.794115    7424 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-987700"
	I0610 10:26:17.794115    7424 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (12.378976s)
	I0610 10:26:17.804483    7424 out.go:177] * Verifying csi-hostpath-driver addon...
	I0610 10:26:17.807460    7424 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0610 10:26:17.813832    7424 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0610 10:26:17.812130    7424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0610 10:26:17.819696    7424 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0610 10:26:17.819764    7424 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0610 10:26:17.867338    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:17.870388    7424 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 10:26:17.870388    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:18.021992    7424 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0610 10:26:18.022102    7424 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0610 10:26:18.146998    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:18.218442    7424 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 10:26:18.218442    7424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0610 10:26:18.288875    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:18.358439    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:18.431831    7424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 10:26:18.604685    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:18.783026    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:18.826544    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:19.094480    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:19.288784    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:19.332841    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:19.604504    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:19.717433    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.7332392s)
	I0610 10:26:19.780738    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:19.826210    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:20.118887    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:20.296524    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:20.362165    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:20.408561    7424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.9767133s)
	I0610 10:26:20.417735    7424 addons.go:475] Verifying addon gcp-auth=true in "addons-987700"
	I0610 10:26:20.421705    7424 out.go:177] * Verifying gcp-auth addon...
	I0610 10:26:20.427405    7424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0610 10:26:20.448258    7424 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 10:26:20.598243    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:20.787073    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:20.834738    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:21.092276    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:21.281523    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:21.329110    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:21.599437    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:21.788453    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:21.835416    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:22.102332    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:22.279362    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:22.327053    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:22.595350    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:22.782550    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:22.833384    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:23.098332    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:23.286811    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:23.333602    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:23.600322    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:23.779057    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:23.829667    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:24.098325    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:24.288925    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:24.322084    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:24.589622    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:24.782379    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:24.830743    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:25.096588    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:25.288105    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:25.332308    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:25.606804    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:25.776956    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:25.826573    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:26.091923    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:26.284617    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:26.330466    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:26.596546    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:26.781244    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:26.828964    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:27.094338    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:27.286097    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:27.330683    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:27.599387    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:27.787989    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:27.821874    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:28.104266    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:28.289938    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:28.642709    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:28.650285    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:28.790789    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:28.839103    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:29.105942    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:29.279502    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:29.323245    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:29.592491    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:29.781723    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:29.829351    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:30.101350    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:30.290039    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:30.337503    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:30.603702    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:30.776870    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:30.823994    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:31.094874    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:31.424796    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:31.425670    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:31.594361    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:31.783168    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:31.831319    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:32.096454    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:32.280700    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:32.330333    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:32.596736    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:32.783231    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:32.844321    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:33.102731    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:33.285333    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:33.383862    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:33.611520    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:33.777330    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:33.836810    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:34.106655    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:34.286283    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:34.330283    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:34.598880    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:34.775887    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:34.825434    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:35.088323    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:35.280464    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:35.329458    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:35.599168    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:35.789747    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:35.821325    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:36.090348    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:36.279975    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:36.326970    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:36.597189    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:36.789793    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:36.842769    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:37.104551    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:37.279867    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:37.325873    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:37.593804    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:37.784827    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:37.830959    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:38.098334    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:38.288493    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:38.337041    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:38.602789    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:38.778978    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:38.830251    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:39.097251    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:39.291587    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:39.339587    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:39.602268    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:39.777429    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:39.826583    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:40.093285    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:40.288341    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:40.334926    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:40.600545    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:40.776237    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:40.827054    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:41.092641    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:41.284894    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:41.335728    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:41.598802    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:41.787826    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:41.837170    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:42.104079    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:42.278713    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:42.326392    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:42.595875    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:42.786352    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:42.834922    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:43.102005    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:43.289597    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:43.337637    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:43.590380    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:43.776902    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:43.824873    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:44.093461    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:44.285338    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:44.332410    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:44.600511    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:44.775154    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:44.822850    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:45.090496    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:45.283456    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:45.329687    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:45.596835    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:45.788342    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:45.836422    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:46.103356    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:46.280083    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:46.331086    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:46.598446    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:46.790602    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:46.840766    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:47.089729    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:47.280565    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:47.328774    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:47.597417    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:47.788389    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:47.822051    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:48.091325    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:48.284809    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:48.330153    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:48.590707    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:48.779473    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:48.829761    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:49.098705    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:49.288236    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:49.337375    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:49.602851    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:49.779672    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:49.829143    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:50.092531    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:50.281613    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:50.330491    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:50.594834    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:50.785491    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:50.833063    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:51.127741    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:51.277723    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:51.326783    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:51.593322    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:51.781827    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:51.828788    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:52.098624    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:52.287842    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:52.322968    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:52.589414    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:52.781712    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:52.831403    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:53.097402    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:53.482171    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:53.486609    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:53.664052    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:53.969010    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:53.969344    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:54.096625    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:54.282855    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:54.732819    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:54.736409    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:54.932137    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:26:54.936481    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:55.097823    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:26:55.289385    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:26:55.348724    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... repeated kapi.go:96 poll lines elided: pods matching "kubernetes.io/minikube-addons=registry", "kubernetes.io/minikube-addons=csi-hostpath-driver", and "app.kubernetes.io/name=ingress-nginx" were re-checked at sub-second intervals and remained Pending from 10:26:55 through 10:27:35 ...]
	I0610 10:27:35.285259    7424 kapi.go:107] duration metric: took 1m18.5170403s to wait for kubernetes.io/minikube-addons=registry ...
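	For context, the kapi.go:96 lines above come from a label-selector poll loop. Below is a minimal, hypothetical sketch of that pattern using client-go; it illustrates the polling behavior only and is not minikube's actual kapi.go code. The name waitForPodRunning and the 250 ms interval are assumptions for the example.

		// sketch.go: illustrative only; not minikube source.
		package sketch

		import (
			"context"
			"fmt"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)

		// waitForPodRunning (hypothetical) polls pods matching labelSelector in ns
		// until all of them reach Running or the timeout elapses, logging the phase
		// on each pass -- the shape of the "waiting for pod ... Pending" lines above.
		func waitForPodRunning(ctx context.Context, c kubernetes.Interface, ns, labelSelector string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: labelSelector})
				if err != nil {
					return err
				}
				allRunning := len(pods.Items) > 0
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false
						fmt.Printf("waiting for pod %q, current state: %s\n", labelSelector, p.Status.Phase)
					}
				}
				if allRunning {
					return nil
				}
				time.Sleep(250 * time.Millisecond) // assumed cadence, matching the sub-second intervals in the log
			}
			return fmt.Errorf("timed out waiting for %q", labelSelector)
		}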
	I0610 10:27:35.335694    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... repeated kapi.go:96 poll lines elided: with the registry wait complete, pods matching "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=csi-hostpath-driver" were re-checked at sub-second intervals and remained Pending from 10:27:35 through 10:28:32 ...]
	I0610 10:28:32.328882    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:32.592353    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:32.832905    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:33.101274    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:33.332146    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:33.595982    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:33.831837    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:34.113323    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:34.337156    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:34.601517    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:34.831959    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:35.099333    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:35.401322    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:35.606987    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:35.830141    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:36.099016    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:36.337509    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:36.591351    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:36.830055    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:37.110223    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:37.347935    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:37.600548    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:37.825493    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:38.093927    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:38.333830    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:38.598699    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:38.836642    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:39.103181    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:39.329229    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:39.597545    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:39.839657    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:40.401864    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:40.575483    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:40.791883    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:40.834799    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:41.133809    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:41.340327    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:41.602903    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:41.826237    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:42.095239    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:42.330809    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:42.603342    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:42.827286    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:43.092775    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:43.331609    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:43.600704    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:43.837948    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:44.102972    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:44.330911    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:44.598588    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:44.837391    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:45.103600    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:45.329723    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:45.595526    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:45.838305    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:46.092686    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:46.330041    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:46.600952    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:46.838808    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:47.105037    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:47.331326    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:47.598131    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:47.837126    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:48.482727    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:48.482727    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:48.608559    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:49.984478    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:49.999479    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:50.000303    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:50.017790    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:50.116826    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:50.328456    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:50.601242    7424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:28:50.873907    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:51.103282    7424 kapi.go:107] duration metric: took 2m38.0191334s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0610 10:28:51.327320    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:51.829949    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:52.336547    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:52.825322    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:53.339643    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:53.838100    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:54.332023    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:54.831350    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:55.336847    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:55.852942    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:56.329989    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:56.839418    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:57.326816    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:57.829676    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:58.335221    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:58.827105    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:59.335100    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:28:59.827128    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:00.534696    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:00.841035    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:01.326093    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:01.831729    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:02.339247    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:02.829422    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:03.335718    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:04.081581    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:04.324695    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:04.465602    7424 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 10:29:04.465665    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:04.831044    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:04.936578    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:05.327134    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:29:05.445620    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:05.839436    7424 kapi.go:107] duration metric: took 2m48.0259282s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0610 10:29:05.945581    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:06.446647    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:06.946121    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:07.444259    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:07.947450    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:08.448784    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:08.936760    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:09.437420    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:09.936512    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:10.447744    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:10.946626    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:11.443688    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:11.944267    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:12.444724    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:12.943976    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:13.445494    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:13.935411    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:14.435625    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:14.943102    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:15.437437    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:15.939283    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:16.442516    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:16.942303    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:17.441928    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:17.946663    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:18.442424    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:18.945847    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:19.441392    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:19.940620    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:20.442086    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:20.941569    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:21.439712    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:21.943568    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:22.451235    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:22.938248    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:23.437618    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:23.946731    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:24.450158    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:24.942087    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:25.447347    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:25.936959    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:26.448906    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:26.942801    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:27.437112    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:27.948495    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:28.438268    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:28.941133    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:29.439995    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:29.943274    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:30.446084    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:30.946611    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:31.445554    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:31.950650    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:32.447908    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:32.946861    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:33.446317    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:33.949304    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:34.436839    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:34.939739    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:35.445097    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:35.946654    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:36.439090    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:36.944199    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:37.449810    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:37.938108    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:38.442138    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:38.936599    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:39.445245    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:39.948344    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:40.448785    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:40.937100    7424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:29:41.443324    7424 kapi.go:107] duration metric: took 3m21.0137346s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0610 10:29:41.446321    7424 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-987700 cluster.
	I0610 10:29:41.449168    7424 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0610 10:29:41.451300    7424 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
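
	As an aside on the `gcp-auth-skip-secret` hint above, here is a hypothetical client-go sketch of creating a pod that the gcp-auth webhook should leave alone. Only the label key is taken from the log message; the pod name, namespace, image, label value, and kubeconfig loading are illustrative assumptions, not minikube or addon code.

	// Hypothetical sketch: a pod labeled so the gcp-auth webhook skips mounting
	// credentials into it. Only the label key comes from the log above; the
	// pod name, namespace, image, and label value are illustrative.
	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
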
	I0610 10:29:41.453961    7424 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, helm-tiller, cloud-spanner, volcano, inspektor-gadget, yakd, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0610 10:29:41.457895    7424 addons.go:510] duration metric: took 4m6.0787114s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin helm-tiller cloud-spanner volcano inspektor-gadget yakd metrics-server storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0610 10:29:41.457895    7424 start.go:245] waiting for cluster config update ...
	I0610 10:29:41.457895    7424 start.go:254] writing updated cluster config ...
	I0610 10:29:41.470568    7424 ssh_runner.go:195] Run: rm -f paused
	I0610 10:29:41.744764    7424 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 10:29:41.752913    7424 out.go:177] * Done! kubectl is now configured to use "addons-987700" cluster and "default" namespace by default
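
	The long run of kapi.go:96 lines above is a simple label-selector poll: list pods matching the selector roughly every half second, log the current phase, and emit the kapi.go:107 duration metric once a pod reports Running. Below is a minimal client-go sketch of that pattern, under the assumption that this is how the wait behaves; the function name, interval, and log wording are illustrative, not minikube's actual kapi code.

	// Minimal sketch of the wait loop implied by the kapi.go:96/107 lines above.
	// Assumed client-go code; waitForPod, the 500ms interval, and the messages
	// are illustrative, not minikube's implementation.
	package kapi

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPod(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := client.CoreV1().Pods(ns).List(context.Background(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				phase := pods.Items[0].Status.Phase
				if phase == corev1.PodRunning {
					// Corresponds to the "duration metric: took ..." lines above.
					fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
					return nil
				}
				// Corresponds to the repeated "waiting for pod ..." lines above.
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, phase)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, selector)
	}
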
	
	
	==> Docker <==
	Jun 10 10:30:37 addons-987700 dockerd[1334]: time="2024-06-10T10:30:37.634398201Z" level=warning msg="cleaning up after shim disconnected" id=e995ace01bc6a4ff3ad101ff948a1a5add8d3c56bd560ac8024c4927158cb55c namespace=moby
	Jun 10 10:30:37 addons-987700 dockerd[1334]: time="2024-06-10T10:30:37.634546503Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 10:30:37 addons-987700 dockerd[1328]: time="2024-06-10T10:30:37.634912109Z" level=info msg="ignoring event" container=e995ace01bc6a4ff3ad101ff948a1a5add8d3c56bd560ac8024c4927158cb55c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 10:30:38 addons-987700 dockerd[1328]: time="2024-06-10T10:30:38.000203650Z" level=info msg="ignoring event" container=2ecc7b941f1a87670b99b0691d81af506baa01994740327d17a55b4061d5baa4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 10:30:38 addons-987700 dockerd[1334]: time="2024-06-10T10:30:38.005725131Z" level=info msg="shim disconnected" id=2ecc7b941f1a87670b99b0691d81af506baa01994740327d17a55b4061d5baa4 namespace=moby
	Jun 10 10:30:38 addons-987700 dockerd[1334]: time="2024-06-10T10:30:38.006544343Z" level=warning msg="cleaning up after shim disconnected" id=2ecc7b941f1a87670b99b0691d81af506baa01994740327d17a55b4061d5baa4 namespace=moby
	Jun 10 10:30:38 addons-987700 dockerd[1334]: time="2024-06-10T10:30:38.006562344Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 10:30:39 addons-987700 dockerd[1334]: time="2024-06-10T10:30:39.359497111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 10:30:39 addons-987700 dockerd[1334]: time="2024-06-10T10:30:39.361967746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 10:30:39 addons-987700 dockerd[1334]: time="2024-06-10T10:30:39.361997147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:30:39 addons-987700 dockerd[1334]: time="2024-06-10T10:30:39.363458168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:30:39 addons-987700 cri-dockerd[1235]: time="2024-06-10T10:30:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b32f53aa12d742b53a9acdaa61a35f872b4b8df163d45c0e7ea758f7e864923/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 10:30:40 addons-987700 cri-dockerd[1235]: time="2024-06-10T10:30:40Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Jun 10 10:30:40 addons-987700 dockerd[1334]: time="2024-06-10T10:30:40.540366447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 10:30:40 addons-987700 dockerd[1334]: time="2024-06-10T10:30:40.541145940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 10:30:40 addons-987700 dockerd[1334]: time="2024-06-10T10:30:40.541502937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:30:40 addons-987700 dockerd[1334]: time="2024-06-10T10:30:40.543853016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:30:48 addons-987700 dockerd[1328]: time="2024-06-10T10:30:48.071076879Z" level=info msg="ignoring event" container=82ca49377a2054b9bba5f70daad5101174f5a27784e536e1ba585454a527e331 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 10:30:48 addons-987700 dockerd[1334]: time="2024-06-10T10:30:48.073855355Z" level=info msg="shim disconnected" id=82ca49377a2054b9bba5f70daad5101174f5a27784e536e1ba585454a527e331 namespace=moby
	Jun 10 10:30:48 addons-987700 dockerd[1334]: time="2024-06-10T10:30:48.074265152Z" level=warning msg="cleaning up after shim disconnected" id=82ca49377a2054b9bba5f70daad5101174f5a27784e536e1ba585454a527e331 namespace=moby
	Jun 10 10:30:48 addons-987700 dockerd[1334]: time="2024-06-10T10:30:48.074429950Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 10:30:48 addons-987700 dockerd[1328]: time="2024-06-10T10:30:48.293912584Z" level=info msg="ignoring event" container=4b32f53aa12d742b53a9acdaa61a35f872b4b8df163d45c0e7ea758f7e864923 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 10:30:48 addons-987700 dockerd[1334]: time="2024-06-10T10:30:48.295371471Z" level=info msg="shim disconnected" id=4b32f53aa12d742b53a9acdaa61a35f872b4b8df163d45c0e7ea758f7e864923 namespace=moby
	Jun 10 10:30:48 addons-987700 dockerd[1334]: time="2024-06-10T10:30:48.295948167Z" level=warning msg="cleaning up after shim disconnected" id=4b32f53aa12d742b53a9acdaa61a35f872b4b8df163d45c0e7ea758f7e864923 namespace=moby
	Jun 10 10:30:48 addons-987700 dockerd[1334]: time="2024-06-10T10:30:48.296976958Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	a7f217ba08228       a416a98b71e22                                                                                                                                29 seconds ago       Exited              helper-pod                               0                   916f01b714ae3       helper-pod-delete-pvc-160300bd-a1e2-4e63-bf32-0e1d8c304ff0
	aa9003bf2f5af       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:40402d51273ea7d281392557096333b5f62316a684f9bc9252214243840f757e                            42 seconds ago       Exited              gadget                                   4                   993a6008d5537       gadget-9p5pw
	3b42111483e6d       nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d                                                                44 seconds ago       Running             nginx                                    0                   c12759f2f07f1       test-job-nginx-0
	0db48a668963c       busybox@sha256:9ae97d36d26566ff84e8893c64a6dc4fe8ca6d1144bf5b87b2b85a32def253c7                                                              44 seconds ago       Exited              busybox                                  0                   c368715bd1e06       test-local-path
	0b6c6c1d0148f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   455262ee6a9ba       gcp-auth-5db96cd9b4-t8jkg
	45f7b1a3dc118       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   9b5671a018eac       csi-hostpathplugin-fqgj7
	d2574784201b9       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   9b5671a018eac       csi-hostpathplugin-fqgj7
	a85623c85e087       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   9b5671a018eac       csi-hostpathplugin-fqgj7
	048a8b1a90a53       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           2 minutes ago        Running             hostpath                                 0                   9b5671a018eac       csi-hostpathplugin-fqgj7
	4cd74f4dcc6b1       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             2 minutes ago        Running             controller                               0                   52fdd95acab6f       ingress-nginx-controller-768f948f8f-kwc5w
	bd95d2da49e89       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                2 minutes ago        Running             node-driver-registrar                    0                   9b5671a018eac       csi-hostpathplugin-fqgj7
	6a88b09765138       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   9b5671a018eac       csi-hostpathplugin-fqgj7
	503b142e26a55       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   354439159e0a2       csi-hostpath-resizer-0
	aba08725f6c0a       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   db98e0ef40390       csi-hostpath-attacher-0
	364a80c9065df       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              patch                                    0                   292107e2a53f7       ingress-nginx-admission-patch-6tk2n
	491f898907622       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   65cb8a91d6472       ingress-nginx-admission-create-k9dhb
	b202ee143fdf5       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   5d2846a33883e       snapshot-controller-745499f584-dtfpk
	8d764ebbd4ed6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   e660dc2660b21       snapshot-controller-745499f584-rj28s
	78b2afdb57b6e       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       3 minutes ago        Running             local-path-provisioner                   0                   3f8aec6039207       local-path-provisioner-8d985888d-hkfr7
	11c71970ddf38       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        3 minutes ago        Running             yakd                                     0                   4bac1c8f7ebcf       yakd-dashboard-5ddbf7d777-jgmcm
	9aaa9e5bf1614       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        3 minutes ago        Running             metrics-server                           0                   355183281b0c2       metrics-server-c59844bb4-nx77k
	154f39327ffc3       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               3 minutes ago        Running             cloud-spanner-emulator                   0                   f473b4a7fa98d       cloud-spanner-emulator-6fcd4f6f98-cjknl
	db94be243eaf3       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  3 minutes ago        Running             tiller                                   0                   c6f994907521e       tiller-deploy-6677d64bcd-zkbpr
	587677ac78b7f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             4 minutes ago        Running             minikube-ingress-dns                     0                   4f7e50a8c478b       kube-ingress-dns-minikube
	ccdf9c76ba8e6       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                                     4 minutes ago        Running             nvidia-device-plugin-ctr                 0                   bfc44eb808757       nvidia-device-plugin-daemonset-k8grz
	d3fe0c1a7148d       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   6ed274891e983       storage-provisioner
	13fd55bd5a5d5       cbb01a7bd410d                                                                                                                                5 minutes ago        Running             coredns                                  0                   0de6fb7735cdd       coredns-7db6d8ff4d-8xjrw
	bc5c3526da140       747097150317f                                                                                                                                5 minutes ago        Running             kube-proxy                               0                   ac6d2f920943c       kube-proxy-k8k5q
	c02646a6ecbd3       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   335fc1ab6cb26       etcd-addons-987700
	324712fc8dfe3       a52dc94f0a912                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   7fdde6c2aaee8       kube-scheduler-addons-987700
	191a1759136fb       25a1387cdab82                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   eee13b8800eb5       kube-controller-manager-addons-987700
	52b9242130506       91be940803172                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   b84970c6e31b0       kube-apiserver-addons-987700
	
	
	==> controller_ingress [4cd74f4dcc6b] <==
	W0610 10:28:50.331830       8 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0610 10:28:50.332371       8 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0610 10:28:50.343962       8 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.1" state="clean" commit="6911225c3f747e1cd9d109c305436d08b668f086" platform="linux/amd64"
	I0610 10:28:50.644854       8 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0610 10:28:50.712216       8 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0610 10:28:50.759734       8 nginx.go:264] "Starting NGINX Ingress controller"
	I0610 10:28:50.827030       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a0886ffb-deb0-45b6-bfdf-47f77c5910b9", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0610 10:28:50.842789       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"d362d807-35ca-4e6a-8dd6-ef3239525c20", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0610 10:28:50.842829       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"598cc0e1-5d9c-4131-bc7c-dcb1081958d8", APIVersion:"v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0610 10:28:51.969036       8 nginx.go:307] "Starting NGINX process"
	I0610 10:28:51.969347       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0610 10:28:51.972041       8 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0610 10:28:51.972265       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0610 10:28:51.994062       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0610 10:28:51.994309       8 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-kwc5w"
	I0610 10:28:52.001473       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-kwc5w" node="addons-987700"
	I0610 10:28:52.062731       8 controller.go:210] "Backend successfully reloaded"
	I0610 10:28:52.067142       8 controller.go:221] "Initial sync, sleeping for 1 second"
	I0610 10:28:52.068412       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-kwc5w", UID:"bfe962eb-9b67-466c-96ab-ce2e48697367", APIVersion:"v1", ResourceVersion:"755", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [13fd55bd5a5d] <==
	[INFO] 10.244.0.7:51486 - 50007 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191203s
	[INFO] 10.244.0.7:48533 - 64252 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059401s
	[INFO] 10.244.0.7:48533 - 45555 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000302004s
	[INFO] 10.244.0.7:50068 - 53817 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000181003s
	[INFO] 10.244.0.7:50068 - 10303 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117202s
	[INFO] 10.244.0.7:34710 - 63287 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000341406s
	[INFO] 10.244.0.7:34710 - 39217 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000108602s
	[INFO] 10.244.0.7:55765 - 21654 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000058701s
	[INFO] 10.244.0.7:55765 - 12691 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097702s
	[INFO] 10.244.0.7:38775 - 40883 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125402s
	[INFO] 10.244.0.7:38775 - 57014 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000096102s
	[INFO] 10.244.0.7:60844 - 11628 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000524s
	[INFO] 10.244.0.7:60844 - 60433 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091501s
	[INFO] 10.244.0.7:38214 - 55740 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049901s
	[INFO] 10.244.0.7:38214 - 26299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161603s
	[INFO] 10.244.0.26:35106 - 60934 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000377605s
	[INFO] 10.244.0.26:40544 - 63451 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000167403s
	[INFO] 10.244.0.26:33761 - 57536 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116401s
	[INFO] 10.244.0.26:38718 - 52903 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157803s
	[INFO] 10.244.0.26:33831 - 20954 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093002s
	[INFO] 10.244.0.26:60403 - 15914 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000415307s
	[INFO] 10.244.0.26:47775 - 38598 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.001638224s
	[INFO] 10.244.0.26:51941 - 14470 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.001217919s
	[INFO] 10.244.0.29:34290 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000461607s
	[INFO] 10.244.0.29:45799 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000333705s
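
	The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion: with `ndots:5` (see the cri-dockerd resolv.conf rewrite in the Docker section), a name such as registry.kube-system.svc.cluster.local has only four dots, so the resolver first tries it with each search suffix appended, and those long concatenated names fail before the bare name succeeds. A small Go sketch of that expansion logic, purely illustrative rather than CoreDNS or resolver code:

	// Sketch of ndots-style search expansion that produces the query names in
	// the coredns log above: a name with fewer dots than ndots is tried with
	// each search suffix before being tried as-is.
	package main

	import (
		"fmt"
		"strings"
	)

	func expand(name string, ndots int, search []string) []string {
		var queries []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				queries = append(queries, name+"."+s)
			}
		}
		return append(queries, name)
	}

	func main() {
		// Search path as in the pod's rewritten resolv.conf (kube-system namespace).
		search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, q := range expand("registry.kube-system.svc.cluster.local", 5, search) {
			fmt.Println(q)
		}
	}

	Running it prints exactly the four query names logged above: the first three are the NXDOMAIN lookups, the last the NOERROR answer.
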
	
	
	==> describe nodes <==
	Name:               addons-987700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-987700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=addons-987700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T10_25_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-987700
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-987700"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-987700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:30:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:30:29 +0000   Mon, 10 Jun 2024 10:25:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:30:29 +0000   Mon, 10 Jun 2024 10:25:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:30:29 +0000   Mon, 10 Jun 2024 10:25:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:30:29 +0000   Mon, 10 Jun 2024 10:25:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.154.55
	  Hostname:    addons-987700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbffb6f6aa18414ea86d3da6ece37843
	  System UUID:                e9e4daa8-4aca-0149-820b-6b8e7752ff33
	  Boot ID:                    b24fa8d3-c81d-49d9-bf83-2be9b6b6e204
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-cjknl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  gadget                      gadget-9p5pw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  gcp-auth                    gcp-auth-5db96cd9b4-t8jkg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-kwc5w    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m42s
	  kube-system                 coredns-7db6d8ff4d-8xjrw                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m19s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpathplugin-fqgj7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-addons-987700                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m35s
	  kube-system                 kube-apiserver-addons-987700                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-addons-987700        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-k8k5q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-addons-987700                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 metrics-server-c59844bb4-nx77k               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m46s
	  kube-system                 nvidia-device-plugin-daemonset-k8grz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 snapshot-controller-745499f584-dtfpk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 snapshot-controller-745499f584-rj28s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 tiller-deploy-6677d64bcd-zkbpr               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  local-path-storage          local-path-provisioner-8d985888d-hkfr7       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  my-volcano                  test-job-nginx-0                             1 (50%)       1 (50%)     0 (0%)           0 (0%)         55s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-jgmcm              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1950m (97%)  1 (50%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m41s (x8 over 5m42s)  kubelet          Node addons-987700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s (x8 over 5m42s)  kubelet          Node addons-987700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s (x7 over 5m42s)  kubelet          Node addons-987700 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m33s                  kubelet          Node addons-987700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s                  kubelet          Node addons-987700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s                  kubelet          Node addons-987700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m30s                  kubelet          Node addons-987700 status is now: NodeReady
	  Normal  RegisteredNode           5m20s                  node-controller  Node addons-987700 event: Registered Node addons-987700 in Controller
	
	
	==> dmesg <==
	[Jun10 10:26] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.068731] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.657223] kauditd_printk_skb: 102 callbacks suppressed
	[ +16.597907] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.832994] hrtimer: interrupt took 564209 ns
	[Jun10 10:27] kauditd_printk_skb: 31 callbacks suppressed
	[ +19.134329] kauditd_printk_skb: 2 callbacks suppressed
	[Jun10 10:28] kauditd_printk_skb: 16 callbacks suppressed
	[  +8.615250] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.993173] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.412966] kauditd_printk_skb: 34 callbacks suppressed
	[ +19.385655] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.022099] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.409215] kauditd_printk_skb: 14 callbacks suppressed
	[Jun10 10:29] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.758341] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.601351] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.756428] kauditd_printk_skb: 27 callbacks suppressed
	[Jun10 10:30] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.927293] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.450051] kauditd_printk_skb: 26 callbacks suppressed
	[ +12.401140] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.233597] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.049216] kauditd_printk_skb: 26 callbacks suppressed
	[ +11.437818] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [c02646a6ecbd] <==
	{"level":"info","ts":"2024-06-10T10:30:01.027275Z","caller":"traceutil/trace.go:171","msg":"trace[453256260] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1668; }","duration":"130.336568ms","start":"2024-06-10T10:30:00.896932Z","end":"2024-06-10T10:30:01.027268Z","steps":["trace[453256260] 'agreement among raft nodes before linearized reading'  (duration: 130.244066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:30:01.028377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.764058ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12930"}
	{"level":"info","ts":"2024-06-10T10:30:01.028404Z","caller":"traceutil/trace.go:171","msg":"trace[285255107] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1668; }","duration":"192.823358ms","start":"2024-06-10T10:30:00.835573Z","end":"2024-06-10T10:30:01.028397Z","steps":["trace[285255107] 'agreement among raft nodes before linearized reading'  (duration: 192.740357ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:30:01.185548Z","caller":"traceutil/trace.go:171","msg":"trace[999613899] transaction","detail":"{read_only:false; response_revision:1669; number_of_response:1; }","duration":"140.107521ms","start":"2024-06-10T10:30:01.045417Z","end":"2024-06-10T10:30:01.185524Z","steps":["trace[999613899] 'process raft request'  (duration: 129.151447ms)","trace[999613899] 'compare'  (duration: 10.602568ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T10:30:06.668685Z","caller":"traceutil/trace.go:171","msg":"trace[1616971319] linearizableReadLoop","detail":"{readStateIndex:1760; appliedIndex:1759; }","duration":"417.601753ms","start":"2024-06-10T10:30:06.251065Z","end":"2024-06-10T10:30:06.668667Z","steps":["trace[1616971319] 'read index received'  (duration: 376.505848ms)","trace[1616971319] 'applied index is now lower than readState.Index'  (duration: 41.095305ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T10:30:06.6691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"418.018959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/my-volcano/test-job-f6581cd1-e8d1-4d8c-854a-5c3a78546247.17d79de76bbb2d83\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2024-06-10T10:30:06.669155Z","caller":"traceutil/trace.go:171","msg":"trace[741858953] range","detail":"{range_begin:/registry/events/my-volcano/test-job-f6581cd1-e8d1-4d8c-854a-5c3a78546247.17d79de76bbb2d83; range_end:; response_count:1; response_revision:1683; }","duration":"418.11406ms","start":"2024-06-10T10:30:06.251032Z","end":"2024-06-10T10:30:06.669146Z","steps":["trace[741858953] 'agreement among raft nodes before linearized reading'  (duration: 417.918257ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:30:06.669265Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:30:06.251018Z","time spent":"418.233762ms","remote":"127.0.0.1:42832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":92,"response count":1,"response size":744,"request content":"key:\"/registry/events/my-volcano/test-job-f6581cd1-e8d1-4d8c-854a-5c3a78546247.17d79de76bbb2d83\" "}
	{"level":"info","ts":"2024-06-10T10:30:06.669728Z","caller":"traceutil/trace.go:171","msg":"trace[1316289929] transaction","detail":"{read_only:false; response_revision:1683; number_of_response:1; }","duration":"461.128395ms","start":"2024-06-10T10:30:06.208588Z","end":"2024-06-10T10:30:06.669717Z","steps":["trace[1316289929] 'process raft request'  (duration: 419.073175ms)","trace[1316289929] 'compare'  (duration: 40.926403ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T10:30:06.669929Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:30:06.208554Z","time spent":"461.282296ms","remote":"127.0.0.1:43014","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1669 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:450 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-06-10T10:30:07.194721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.791885ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11286741953036575846 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/my-volcano/test-job-f6581cd1-e8d1-4d8c-854a-5c3a78546247.17d79de76bbb2d83\" mod_revision:1678 > success:<request_put:<key:\"/registry/events/my-volcano/test-job-f6581cd1-e8d1-4d8c-854a-5c3a78546247.17d79de76bbb2d83\" value_size:598 lease:2063369916181799395 >> failure:<request_range:<key:\"/registry/events/my-volcano/test-job-f6581cd1-e8d1-4d8c-854a-5c3a78546247.17d79de76bbb2d83\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-10T10:30:07.195829Z","caller":"traceutil/trace.go:171","msg":"trace[1459034518] linearizableReadLoop","detail":"{readStateIndex:1761; appliedIndex:1760; }","duration":"419.082373ms","start":"2024-06-10T10:30:06.776732Z","end":"2024-06-10T10:30:07.195814Z","steps":["trace[1459034518] 'read index received'  (duration: 249.134171ms)","trace[1459034518] 'applied index is now lower than readState.Index'  (duration: 169.946402ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T10:30:07.195996Z","caller":"traceutil/trace.go:171","msg":"trace[625107902] transaction","detail":"{read_only:false; response_revision:1684; number_of_response:1; }","duration":"523.39781ms","start":"2024-06-10T10:30:06.672581Z","end":"2024-06-10T10:30:07.195978Z","steps":["trace[625107902] 'process raft request'  (duration: 353.282605ms)","trace[625107902] 'compare'  (duration: 168.671283ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T10:30:07.196349Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:30:06.672562Z","time spent":"523.703014ms","remote":"127.0.0.1:42832","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":706,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/my-volcano/test-job-f6581cd1-e8d1-4d8c-854a-5c3a78546247.17d79de76bbb2d83\" mod_revision:1678 > success:<request_put:<key:\"/registry/events/my-volcano/test-job-f6581cd1-e8d1-4d8c-854a-5c3a78546247.17d79de76bbb2d83\" value_size:598 lease:2063369916181799395 >> failure:<request_range:<key:\"/registry/events/my-volcano/test-job-f6581cd1-e8d1-4d8c-854a-5c3a78546247.17d79de76bbb2d83\" > >"}
	{"level":"warn","ts":"2024-06-10T10:30:07.19673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"379.111784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12930"}
	{"level":"info","ts":"2024-06-10T10:30:07.196821Z","caller":"traceutil/trace.go:171","msg":"trace[1034200704] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1684; }","duration":"379.307587ms","start":"2024-06-10T10:30:06.817502Z","end":"2024-06-10T10:30:07.19681Z","steps":["trace[1034200704] 'agreement among raft nodes before linearized reading'  (duration: 379.121184ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:30:07.200252Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:30:06.817488Z","time spent":"382.750937ms","remote":"127.0.0.1:42924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":12953,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-06-10T10:30:07.19806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"421.320106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3625"}
	{"level":"info","ts":"2024-06-10T10:30:07.200557Z","caller":"traceutil/trace.go:171","msg":"trace[1032690897] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1684; }","duration":"423.839443ms","start":"2024-06-10T10:30:06.776699Z","end":"2024-06-10T10:30:07.200539Z","steps":["trace[1032690897] 'agreement among raft nodes before linearized reading'  (duration: 421.219804ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:30:07.200736Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:30:06.776683Z","time spent":"424.038146ms","remote":"127.0.0.1:42924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":3648,"request content":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" "}
	{"level":"warn","ts":"2024-06-10T10:30:07.198385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"301.528141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12930"}
	{"level":"info","ts":"2024-06-10T10:30:07.201228Z","caller":"traceutil/trace.go:171","msg":"trace[245584590] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1684; }","duration":"304.371283ms","start":"2024-06-10T10:30:06.896847Z","end":"2024-06-10T10:30:07.201218Z","steps":["trace[245584590] 'agreement among raft nodes before linearized reading'  (duration: 301.45004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:30:07.201407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:30:06.896802Z","time spent":"304.594185ms","remote":"127.0.0.1:42924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":12953,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-06-10T10:30:35.554563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.699792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/jobs.batch.volcano.sh\" ","response":"range_response_count:1 size:521740"}
	{"level":"info","ts":"2024-06-10T10:30:35.554697Z","caller":"traceutil/trace.go:171","msg":"trace[1377375471] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/jobs.batch.volcano.sh; range_end:; response_count:1; response_revision:1848; }","duration":"120.901696ms","start":"2024-06-10T10:30:35.433746Z","end":"2024-06-10T10:30:35.554648Z","steps":["trace[1377375471] 'range keys from bolt db'  (duration: 120.621992ms)"],"step_count":1}
	
	
	==> gcp-auth [0b6c6c1d0148] <==
	2024/06/10 10:29:40 GCP Auth Webhook started!
	2024/06/10 10:29:48 Ready to marshal response ...
	2024/06/10 10:29:48 Ready to write response ...
	2024/06/10 10:29:48 Ready to marshal response ...
	2024/06/10 10:29:48 Ready to write response ...
	2024/06/10 10:29:48 Ready to marshal response ...
	2024/06/10 10:29:48 Ready to write response ...
	2024/06/10 10:29:54 Ready to marshal response ...
	2024/06/10 10:29:54 Ready to write response ...
	2024/06/10 10:29:58 Ready to marshal response ...
	2024/06/10 10:29:58 Ready to write response ...
	2024/06/10 10:29:59 Ready to marshal response ...
	2024/06/10 10:29:59 Ready to write response ...
	2024/06/10 10:30:24 Ready to marshal response ...
	2024/06/10 10:30:24 Ready to write response ...
	2024/06/10 10:30:38 Ready to marshal response ...
	2024/06/10 10:30:38 Ready to write response ...
	
	
	==> kernel <==
	 10:30:55 up 7 min,  0 users,  load average: 2.75, 2.49, 1.25
	Linux addons-987700 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [52b924213050] <==
	I0610 10:30:14.131493       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0610 10:30:35.776014       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0610 10:30:35.999013       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0610 10:30:36.445139       1 trace.go:236] Trace[3368491]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:6350ed6a-b520-4dae-a49a-7800c31d5824,client:127.0.0.1,api-group:apiextensions.k8s.io,api-version:v1,name:jobs.batch.volcano.sh,subresource:status,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/jobs.batch.volcano.sh/status,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PUT (10-Jun-2024 10:30:35.828) (total time: 616ms):
	Trace[3368491]: ---"limitedReadBody succeeded" len:505488 19ms (10:30:35.848)
	Trace[3368491]: ---"Conversion done" 97ms (10:30:35.945)
	Trace[3368491]: ---"Writing http response done" 16ms (10:30:36.445)
	Trace[3368491]: [616.507324ms] [616.507324ms] END
	I0610 10:30:36.757924       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0610 10:30:36.931545       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0610 10:30:37.083268       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	W0610 10:30:37.154376       1 cacher.go:168] Terminating all watchers from cacher commands.bus.volcano.sh
	I0610 10:30:37.229671       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0610 10:30:37.426734       1 trace.go:236] Trace[982616526]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:bfd08ecd-3dcb-4347-8c97-b7898fe2b1b8,client:127.0.0.1,api-group:apiextensions.k8s.io,api-version:v1,name:jobs.batch.volcano.sh,subresource:status,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/jobs.batch.volcano.sh/status,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PUT (10-Jun-2024 10:30:36.594) (total time: 832ms):
	Trace[982616526]: ---"limitedReadBody succeeded" len:505489 53ms (10:30:36.647)
	Trace[982616526]: ["GuaranteedUpdate etcd3" audit-id:bfd08ecd-3dcb-4347-8c97-b7898fe2b1b8,key:/apiextensions.k8s.io/customresourcedefinitions/jobs.batch.volcano.sh,type:*apiextensions.CustomResourceDefinition,resource:customresourcedefinitions.apiextensions.k8s.io 763ms (10:30:36.662)]
	Trace[982616526]: ---"Write to database call succeeded" len:505489 156ms (10:30:37.421)
	Trace[982616526]: [832.075155ms] [832.075155ms] END
	W0610 10:30:38.250606       1 cacher.go:168] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0610 10:30:38.325542       1 cacher.go:168] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0610 10:30:38.484237       1 cacher.go:168] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0610 10:30:38.511360       1 cacher.go:168] Terminating all watchers from cacher queues.scheduling.volcano.sh
	I0610 10:30:40.586852       1 trace.go:236] Trace[571229612]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f7ef9ab0-3dc9-4c4e-87a3-78f3bfec2b1c,client:172.17.154.55,api-group:,api-version:v1,name:,subresource:,namespace:volcano-system,protocol:HTTP/2.0,resource:events,scope:namespace,url:/api/v1/namespaces/volcano-system/events,user-agent:kube-controller-manager/v1.30.1 (linux/amd64) kubernetes/6911225/system:serviceaccount:kube-system:namespace-controller,verb:DELETE (10-Jun-2024 10:30:40.083) (total time: 503ms):
	Trace[571229612]: ---"About to write a response" 499ms (10:30:40.586)
	Trace[571229612]: [503.273678ms] [503.273678ms] END
	
	
	==> kube-controller-manager [191a1759136f] <==
	W0610 10:30:40.985458       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:40.985582       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:41.950559       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:41.950635       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:42.294434       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:42.294569       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:42.451481       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:42.451614       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:42.690880       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:42.690941       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0610 10:30:44.202099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-8d985888d" duration="6.4µs"
	W0610 10:30:45.607495       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:45.608047       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0610 10:30:45.657111       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="volcano-monitoring"
	I0610 10:30:45.925936       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="volcano-system"
	W0610 10:30:46.938729       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:46.939074       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:47.166865       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:47.166978       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:47.657617       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:47.657663       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:48.464669       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:48.464998       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:54.216934       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:54.216978       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [bc5c3526da14] <==
	I0610 10:25:43.888016       1 server_linux.go:69] "Using iptables proxy"
	I0610 10:25:44.017143       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.154.55"]
	I0610 10:25:44.432604       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:25:44.432986       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:25:44.433027       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:25:44.452417       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:25:44.452968       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:25:44.452992       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:25:44.481489       1 config.go:319] "Starting node config controller"
	I0610 10:25:44.483871       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 10:25:44.488121       1 config.go:192] "Starting service config controller"
	I0610 10:25:44.488290       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:25:44.488483       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:25:44.500257       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:25:44.606535       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 10:25:44.606834       1 shared_informer.go:320] Caches are synced for node config
	I0610 10:25:44.620421       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [324712fc8dfe] <==
	W0610 10:25:18.875799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 10:25:18.876294       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 10:25:18.929857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 10:25:18.929907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 10:25:18.971220       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 10:25:18.971558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 10:25:18.980298       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:25:18.980354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:25:19.009121       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 10:25:19.009178       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 10:25:19.034118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 10:25:19.034447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 10:25:19.055153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 10:25:19.055288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 10:25:19.073341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 10:25:19.073491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 10:25:19.105583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 10:25:19.105700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 10:25:19.152731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 10:25:19.152878       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 10:25:19.449992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 10:25:19.450494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 10:25:19.537686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 10:25:19.537745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 10:25:20.828083       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 10:30:38 addons-987700 kubelet[2125]: I0610 10:30:38.942553    2125 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vtlh\" (UniqueName: \"kubernetes.io/projected/705450a4-eed8-46b0-be9b-80463ed35c1b-kube-api-access-4vtlh\") pod \"task-pv-pod-restore\" (UID: \"705450a4-eed8-46b0-be9b-80463ed35c1b\") " pod="default/task-pv-pod-restore"
	Jun 10 10:30:38 addons-987700 kubelet[2125]: I0610 10:30:38.943161    2125 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fec423a5-3123-49fb-8693-126c91ceaad9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7a5ebcef-2714-11ef-8564-ce4f52db6350\") pod \"task-pv-pod-restore\" (UID: \"705450a4-eed8-46b0-be9b-80463ed35c1b\") " pod="default/task-pv-pod-restore"
	Jun 10 10:30:38 addons-987700 kubelet[2125]: I0610 10:30:38.943301    2125 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/705450a4-eed8-46b0-be9b-80463ed35c1b-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"705450a4-eed8-46b0-be9b-80463ed35c1b\") " pod="default/task-pv-pod-restore"
	Jun 10 10:30:39 addons-987700 kubelet[2125]: I0610 10:30:39.059747    2125 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-fec423a5-3123-49fb-8693-126c91ceaad9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7a5ebcef-2714-11ef-8564-ce4f52db6350\") pod \"task-pv-pod-restore\" (UID: \"705450a4-eed8-46b0-be9b-80463ed35c1b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/a1c9a156c8b9a98f2d6febc242b9a7a7fffb0717d3b99dc0062d35d2a5bae8df/globalmount\"" pod="default/task-pv-pod-restore"
	Jun 10 10:30:39 addons-987700 kubelet[2125]: I0610 10:30:39.495181    2125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f8353ad-d185-4c85-85f5-82b4723bdd3a" path="/var/lib/kubelet/pods/3f8353ad-d185-4c85-85f5-82b4723bdd3a/volumes"
	Jun 10 10:30:39 addons-987700 kubelet[2125]: I0610 10:30:39.496068    2125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c75a2436-de44-46d8-9833-d3bcb9b7dcbb" path="/var/lib/kubelet/pods/c75a2436-de44-46d8-9833-d3bcb9b7dcbb/volumes"
	Jun 10 10:30:44 addons-987700 kubelet[2125]: I0610 10:30:44.220881    2125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=5.6268579800000005 podStartE2EDuration="6.220858147s" podCreationTimestamp="2024-06-10 10:30:38 +0000 UTC" firstStartedPulling="2024-06-10 10:30:39.7392168 +0000 UTC m=+318.527011385" lastFinishedPulling="2024-06-10 10:30:40.333216867 +0000 UTC m=+319.121011552" observedRunningTime="2024-06-10 10:30:41.387267218 +0000 UTC m=+320.175061903" watchObservedRunningTime="2024-06-10 10:30:44.220858147 +0000 UTC m=+323.008652732"
	Jun 10 10:30:46 addons-987700 kubelet[2125]: I0610 10:30:46.466935    2125 scope.go:117] "RemoveContainer" containerID="aa9003bf2f5afad04688abffab8ba66c46fd40f0b98c11fa7d41a70b389c92b4"
	Jun 10 10:30:46 addons-987700 kubelet[2125]: E0610 10:30:46.467689    2125 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-9p5pw_gadget(acbf4f7a-165b-4a35-8400-23b38e260168)\"" pod="gadget/gadget-9p5pw" podUID="acbf4f7a-165b-4a35-8400-23b38e260168"
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.550043    2125 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7a5ebcef-2714-11ef-8564-ce4f52db6350\") pod \"705450a4-eed8-46b0-be9b-80463ed35c1b\" (UID: \"705450a4-eed8-46b0-be9b-80463ed35c1b\") "
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.550147    2125 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vtlh\" (UniqueName: \"kubernetes.io/projected/705450a4-eed8-46b0-be9b-80463ed35c1b-kube-api-access-4vtlh\") pod \"705450a4-eed8-46b0-be9b-80463ed35c1b\" (UID: \"705450a4-eed8-46b0-be9b-80463ed35c1b\") "
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.550235    2125 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/705450a4-eed8-46b0-be9b-80463ed35c1b-gcp-creds\") pod \"705450a4-eed8-46b0-be9b-80463ed35c1b\" (UID: \"705450a4-eed8-46b0-be9b-80463ed35c1b\") "
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.551181    2125 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705450a4-eed8-46b0-be9b-80463ed35c1b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "705450a4-eed8-46b0-be9b-80463ed35c1b" (UID: "705450a4-eed8-46b0-be9b-80463ed35c1b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.559087    2125 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/705450a4-eed8-46b0-be9b-80463ed35c1b-kube-api-access-4vtlh" (OuterVolumeSpecName: "kube-api-access-4vtlh") pod "705450a4-eed8-46b0-be9b-80463ed35c1b" (UID: "705450a4-eed8-46b0-be9b-80463ed35c1b"). InnerVolumeSpecName "kube-api-access-4vtlh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.568670    2125 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^7a5ebcef-2714-11ef-8564-ce4f52db6350" (OuterVolumeSpecName: "task-pv-storage") pod "705450a4-eed8-46b0-be9b-80463ed35c1b" (UID: "705450a4-eed8-46b0-be9b-80463ed35c1b"). InnerVolumeSpecName "pvc-fec423a5-3123-49fb-8693-126c91ceaad9". PluginName "kubernetes.io/csi", VolumeGidValue ""
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.621441    2125 scope.go:117] "RemoveContainer" containerID="82ca49377a2054b9bba5f70daad5101174f5a27784e536e1ba585454a527e331"
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.651276    2125 reconciler_common.go:282] "operationExecutor.UnmountDevice started for volume \"pvc-fec423a5-3123-49fb-8693-126c91ceaad9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7a5ebcef-2714-11ef-8564-ce4f52db6350\") on node \"addons-987700\" "
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.651470    2125 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4vtlh\" (UniqueName: \"kubernetes.io/projected/705450a4-eed8-46b0-be9b-80463ed35c1b-kube-api-access-4vtlh\") on node \"addons-987700\" DevicePath \"\""
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.651539    2125 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/705450a4-eed8-46b0-be9b-80463ed35c1b-gcp-creds\") on node \"addons-987700\" DevicePath \"\""
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.684342    2125 scope.go:117] "RemoveContainer" containerID="82ca49377a2054b9bba5f70daad5101174f5a27784e536e1ba585454a527e331"
	Jun 10 10:30:48 addons-987700 kubelet[2125]: E0610 10:30:48.687802    2125 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 82ca49377a2054b9bba5f70daad5101174f5a27784e536e1ba585454a527e331" containerID="82ca49377a2054b9bba5f70daad5101174f5a27784e536e1ba585454a527e331"
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.688195    2125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"82ca49377a2054b9bba5f70daad5101174f5a27784e536e1ba585454a527e331"} err="failed to get container status \"82ca49377a2054b9bba5f70daad5101174f5a27784e536e1ba585454a527e331\": rpc error: code = Unknown desc = Error response from daemon: No such container: 82ca49377a2054b9bba5f70daad5101174f5a27784e536e1ba585454a527e331"
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.699363    2125 operation_generator.go:1001] UnmountDevice succeeded for volume "pvc-fec423a5-3123-49fb-8693-126c91ceaad9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^7a5ebcef-2714-11ef-8564-ce4f52db6350") on node "addons-987700"
	Jun 10 10:30:48 addons-987700 kubelet[2125]: I0610 10:30:48.753081    2125 reconciler_common.go:289] "Volume detached for volume \"pvc-fec423a5-3123-49fb-8693-126c91ceaad9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7a5ebcef-2714-11ef-8564-ce4f52db6350\") on node \"addons-987700\" DevicePath \"\""
	Jun 10 10:30:49 addons-987700 kubelet[2125]: I0610 10:30:49.493096    2125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="705450a4-eed8-46b0-be9b-80463ed35c1b" path="/var/lib/kubelet/pods/705450a4-eed8-46b0-be9b-80463ed35c1b/volumes"
	
	
	==> storage-provisioner [d3fe0c1a7148] <==
	I0610 10:26:02.314999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 10:26:02.344976       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 10:26:02.345013       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 10:26:02.381920       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 10:26:02.382103       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-987700_0a8ec9d4-91f6-4e62-9d6f-5af799705182!
	I0610 10:26:02.390856       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"55d735be-2f01-46bb-9d15-28ce896c3c33", APIVersion:"v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-987700_0a8ec9d4-91f6-4e62-9d6f-5af799705182 became leader
	I0610 10:26:02.482679       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-987700_0a8ec9d4-91f6-4e62-9d6f-5af799705182!
	

-- /stdout --
** stderr ** 
	W0610 10:30:45.662554    1484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-987700 -n addons-987700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-987700 -n addons-987700: (13.9049261s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-987700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-k9dhb ingress-nginx-admission-patch-6tk2n
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-987700 describe pod ingress-nginx-admission-create-k9dhb ingress-nginx-admission-patch-6tk2n
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-987700 describe pod ingress-nginx-admission-create-k9dhb ingress-nginx-admission-patch-6tk2n: exit status 1 (164.2855ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-k9dhb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6tk2n" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-987700 describe pod ingress-nginx-admission-create-k9dhb ingress-nginx-admission-patch-6tk2n: exit status 1
--- FAIL: TestAddons/parallel/Registry (89.17s)
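The only stderr this test produced is the W0610 "Unable to resolve the current Docker CLI context" warning shown above; the same warning re-appears below as the unexpected stderr that fails TestErrorSpam/setup. The hash in the missing path is sha256("default"): the Docker CLI keys its context store by the SHA-256 of the context name, so the file it is looking for is .docker\contexts\meta\<sha256("default")>\meta.json. A minimal Go sketch of that lookup, assuming only the path layout visible in the warning (contextMetaPath is an illustrative helper, not minikube code):

	// Reproduce the path from the warning: the Docker CLI addresses its
	// context store by sha256(context name) under ~/.docker/contexts/meta.
	package main

	import (
		"crypto/sha256"
		"fmt"
		"os"
		"path/filepath"
	)

	func contextMetaPath(home, name string) string {
		sum := sha256.Sum256([]byte(name)) // "default" -> 37a8eec1ce19...33f0688f
		return filepath.Join(home, ".docker", "contexts", "meta", fmt.Sprintf("%x", sum), "meta.json")
	}

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			panic(err)
		}
		if _, err := os.Stat(contextMetaPath(home, "default")); err != nil {
			fmt.Println("context meta not found:", err) // matches the W0610 stderr above
		}
	}

Clearing the stale context selection on the Jenkins host (for example, "docker context use default", or deleting the currentContext key from .docker\config.json) should silence the warning, which is all the empty-stderr assertions require.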

TestErrorSpam/setup (208.43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-947800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-947800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 --driver=hyperv: (3m28.4308558s)
error_spam_test.go:96: unexpected stderr: "W0610 10:35:03.547517   11224 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-947800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=19046
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-947800" primary control-plane node in "nospam-947800" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-947800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0610 10:35:03.547517   11224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (208.43s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (36.37s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
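This failure is a leftover-artifact collision rather than a cluster problem: a kubectl.exe from an earlier run is still present in out\, and a Windows hard link cannot overwrite an existing file. A minimal Go sketch of the link-refresh pattern that avoids it (linkFresh is an illustrative helper using the paths from the message above; it is not the code at functional_test.go:731):

	// Create the hard link idempotently: os.Link fails on Windows with
	// "Cannot create a file when that file already exists" if dst is present,
	// so remove any stale destination first.
	package main

	import (
		"log"
		"os"
	)

	func linkFresh(src, dst string) error {
		if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
			return err // a real removal error, not just "nothing to remove"
		}
		return os.Link(src, dst)
	}

	func main() {
		if err := linkFresh(`out/minikube-windows-amd64.exe`, `out\kubectl.exe`); err != nil {
			log.Fatal(err)
		}
	}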
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-228600 -n functional-228600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-228600 -n functional-228600: (12.8892638s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 logs -n 25: (9.140355s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-947800 --log_dir                                     | nospam-947800     | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:39 UTC | 10 Jun 24 10:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-947800 --log_dir                                     | nospam-947800     | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:39 UTC | 10 Jun 24 10:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-947800 --log_dir                                     | nospam-947800     | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:40 UTC | 10 Jun 24 10:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-947800 --log_dir                                     | nospam-947800     | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:40 UTC | 10 Jun 24 10:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-947800 --log_dir                                     | nospam-947800     | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:40 UTC | 10 Jun 24 10:41 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-947800 --log_dir                                     | nospam-947800     | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:41 UTC | 10 Jun 24 10:41 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-947800 --log_dir                                     | nospam-947800     | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:41 UTC | 10 Jun 24 10:41 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-947800                                            | nospam-947800     | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:41 UTC | 10 Jun 24 10:41 UTC |
	| start   | -p functional-228600                                        | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:41 UTC | 10 Jun 24 10:45 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-228600                                        | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:45 UTC | 10 Jun 24 10:48 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-228600 cache add                                 | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:48 UTC | 10 Jun 24 10:48 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-228600 cache add                                 | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:48 UTC | 10 Jun 24 10:48 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-228600 cache add                                 | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:48 UTC | 10 Jun 24 10:48 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-228600 cache add                                 | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:48 UTC | 10 Jun 24 10:48 UTC |
	|         | minikube-local-cache-test:functional-228600                 |                   |                   |         |                     |                     |
	| cache   | functional-228600 cache delete                              | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:48 UTC | 10 Jun 24 10:48 UTC |
	|         | minikube-local-cache-test:functional-228600                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:48 UTC | 10 Jun 24 10:48 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:48 UTC | 10 Jun 24 10:48 UTC |
	| ssh     | functional-228600 ssh sudo                                  | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:48 UTC | 10 Jun 24 10:49 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-228600                                           | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:49 UTC | 10 Jun 24 10:49 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-228600 ssh                                       | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:49 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-228600 cache reload                              | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:49 UTC | 10 Jun 24 10:49 UTC |
	| ssh     | functional-228600 ssh                                       | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:49 UTC | 10 Jun 24 10:49 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:49 UTC | 10 Jun 24 10:49 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:49 UTC | 10 Jun 24 10:49 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-228600 kubectl --                                | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:49 UTC | 10 Jun 24 10:49 UTC |
	|         | --context functional-228600                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:45:57
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
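
The header above describes glog/klog-style line framing ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). A minimal Go sketch for splitting such lines when post-processing this report; the regexp and field names are assumptions, not part of minikube:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg",
    // the framing described by the "Log line format" header above.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

    func main() {
    	sample := "I0610 10:45:57.792232    9108 out.go:291] Setting OutFile to fd 876 ..."
    	if m := klogLine.FindStringSubmatch(sample); m != nil {
    		fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
    			m[1], m[2], m[3], m[4], m[5], m[6])
    	}
    }
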
	I0610 10:45:57.792232    9108 out.go:291] Setting OutFile to fd 876 ...
	I0610 10:45:57.792937    9108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:57.792937    9108 out.go:304] Setting ErrFile to fd 788...
	I0610 10:45:57.792937    9108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:57.816887    9108 out.go:298] Setting JSON to false
	I0610 10:45:57.821858    9108 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16246,"bootTime":1718000111,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 10:45:57.821858    9108 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 10:45:57.827288    9108 out.go:177] * [functional-228600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 10:45:57.832958    9108 notify.go:220] Checking for updates...
	I0610 10:45:57.835988    9108 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 10:45:57.838894    9108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:45:57.844153    9108 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 10:45:57.848984    9108 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:45:57.851441    9108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:45:57.854920    9108 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 10:45:57.855249    9108 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:46:03.544743    9108 out.go:177] * Using the hyperv driver based on existing profile
	I0610 10:46:03.549175    9108 start.go:297] selected driver: hyperv
	I0610 10:46:03.549175    9108 start.go:901] validating driver "hyperv" against &{Name:functional-228600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-228600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.144.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:46:03.549408    9108 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:46:03.598267    9108 cni.go:84] Creating CNI manager for ""
	I0610 10:46:03.598267    9108 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:46:03.598881    9108 start.go:340] cluster config:
	{Name:functional-228600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-228600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.144.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:46:03.599432    9108 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:46:03.605311    9108 out.go:177] * Starting "functional-228600" primary control-plane node in "functional-228600" cluster
	I0610 10:46:03.607923    9108 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 10:46:03.608556    9108 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 10:46:03.608617    9108 cache.go:56] Caching tarball of preloaded images
	I0610 10:46:03.609015    9108 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 10:46:03.609291    9108 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 10:46:03.609537    9108 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\config.json ...
	I0610 10:46:03.612464    9108 start.go:360] acquireMachinesLock for functional-228600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:46:03.612464    9108 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-228600"
	I0610 10:46:03.612820    9108 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:46:03.612867    9108 fix.go:54] fixHost starting: 
	I0610 10:46:03.613060    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:06.591396    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:06.592036    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:06.592036    9108 fix.go:112] recreateIfNeeded on functional-228600: state=Running err=<nil>
	W0610 10:46:06.592036    9108 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 10:46:06.598495    9108 out.go:177] * Updating the running hyperv "functional-228600" VM ...
	I0610 10:46:06.602893    9108 machine.go:94] provisionDockerMachine start ...
	I0610 10:46:06.602893    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:08.960641    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:08.960641    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:08.961324    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:46:11.834410    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:46:11.834460    9108 main.go:141] libmachine: [stderr =====>] : 
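
The driver resolves the VM's address by shelling out to PowerShell, as the [executing ==>] lines above show. A minimal Go sketch of that pattern, assuming powershell.exe is on PATH; it is illustrative, not minikube's hyperv driver code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // vmIP mirrors the PowerShell expression in the log: read the first IP
    // address of the VM's first network adapter.
    func vmIP(name string) (string, error) {
    	script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	ip, err := vmIP("functional-228600") // VM name taken from the log above
    	fmt.Println(ip, err)
    }
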
	I0610 10:46:11.840805    9108 main.go:141] libmachine: Using SSH client type: native
	I0610 10:46:11.841295    9108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.165 22 <nil> <nil>}
	I0610 10:46:11.841295    9108 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 10:46:11.985063    9108 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-228600
	
	I0610 10:46:11.985063    9108 buildroot.go:166] provisioning hostname "functional-228600"
	I0610 10:46:11.985063    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:14.329373    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:14.329424    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:14.329424    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:46:17.088251    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:46:17.088408    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:17.094113    9108 main.go:141] libmachine: Using SSH client type: native
	I0610 10:46:17.094113    9108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.165 22 <nil> <nil>}
	I0610 10:46:17.094113    9108 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-228600 && echo "functional-228600" | sudo tee /etc/hostname
	I0610 10:46:17.258725    9108 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-228600
	
	I0610 10:46:17.258725    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:19.573100    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:19.573100    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:19.573100    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:46:22.361096    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:46:22.361886    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:22.368283    9108 main.go:141] libmachine: Using SSH client type: native
	I0610 10:46:22.368283    9108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.165 22 <nil> <nil>}
	I0610 10:46:22.368283    9108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-228600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-228600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-228600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:46:22.511329    9108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:46:22.511456    9108 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 10:46:22.511556    9108 buildroot.go:174] setting up certificates
	I0610 10:46:22.511585    9108 provision.go:84] configureAuth start
	I0610 10:46:22.511585    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:24.825509    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:24.825746    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:24.825832    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:46:27.611283    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:46:27.611384    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:27.611454    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:29.940651    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:29.940716    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:29.940716    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:46:32.705770    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:46:32.706182    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:32.706182    9108 provision.go:143] copyHostCerts
	I0610 10:46:32.706385    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 10:46:32.706385    9108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 10:46:32.706385    9108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 10:46:32.707223    9108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 10:46:32.708266    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 10:46:32.708437    9108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 10:46:32.708437    9108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 10:46:32.708437    9108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 10:46:32.709675    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 10:46:32.709985    9108 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 10:46:32.709985    9108 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 10:46:32.709985    9108 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 10:46:32.711019    9108 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-228600 san=[127.0.0.1 172.17.144.165 functional-228600 localhost minikube]
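
provision.go generates a server certificate signed by the minikube CA, with the SANs listed above (127.0.0.1, 172.17.144.165, functional-228600, localhost, minikube). A reduced Go sketch of that step using the standard crypto/x509 package; the key generation stands in for loading ca-key.pem, error handling is omitted, and the templates are assumptions rather than minikube's exact provisioning code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-ins for the CA and server keys; minikube loads the CA key
    	// from the ca-key.pem path logged above.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"}, // name assumed from APIServerName in the config dump
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	server := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-228600"}}, // org from the log line above
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.144.165")},
    		DNSNames:     []string{"functional-228600", "localhost", "minikube"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, server, ca, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
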
	I0610 10:46:32.822772    9108 provision.go:177] copyRemoteCerts
	I0610 10:46:32.836359    9108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:46:32.836359    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:35.156331    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:35.156331    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:35.157145    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:46:37.936132    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:46:37.936132    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:37.937085    9108 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
	I0610 10:46:38.043347    9108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2059568s)
	I0610 10:46:38.043347    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 10:46:38.043347    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:46:38.098100    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 10:46:38.098700    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0610 10:46:38.152807    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 10:46:38.153066    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:46:38.215392    9108 provision.go:87] duration metric: took 15.7036797s to configureAuth
	I0610 10:46:38.215392    9108 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:46:38.216037    9108 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 10:46:38.216037    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:40.560833    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:40.561665    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:40.561665    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:46:43.355419    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:46:43.355419    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:43.361716    9108 main.go:141] libmachine: Using SSH client type: native
	I0610 10:46:43.362504    9108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.165 22 <nil> <nil>}
	I0610 10:46:43.362504    9108 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 10:46:43.513761    9108 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 10:46:43.513761    9108 buildroot.go:70] root file system type: tmpfs
	I0610 10:46:43.513761    9108 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 10:46:43.513761    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:45.772906    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:45.772906    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:45.773016    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:46:48.476449    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:46:48.476449    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:48.486712    9108 main.go:141] libmachine: Using SSH client type: native
	I0610 10:46:48.486712    9108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.165 22 <nil> <nil>}
	I0610 10:46:48.487613    9108 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 10:46:48.651784    9108 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
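
The %!s(MISSING) tokens in the command above (and in "date +%!s(MISSING).%!N(MISSING)" later) are not corruption in this report: they are Go's fmt package rendering a %-verb that reached a format call without a matching argument; here the verbs belong to the shell's own printf and date. A one-line demonstration:

    package main

    import "fmt"

    func main() {
    	// A format string that passes through a Go format call without its
    	// arguments: Go substitutes %!verb(MISSING) for each unmatched verb.
    	cmd := fmt.Sprintf("date +%s.%N") // intended shell verbs, no Go arguments
    	fmt.Println(cmd)                  // prints: date +%!s(MISSING).%!N(MISSING)
    }
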
	
	I0610 10:46:48.651784    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:50.948304    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:50.948304    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:50.948562    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:46:53.744293    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:46:53.744293    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:53.750198    9108 main.go:141] libmachine: Using SSH client type: native
	I0610 10:46:53.751019    9108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.165 22 <nil> <nil>}
	I0610 10:46:53.751090    9108 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 10:46:53.911761    9108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:46:53.911838    9108 machine.go:97] duration metric: took 47.3084852s to provisionDockerMachine
	I0610 10:46:53.911838    9108 start.go:293] postStartSetup for "functional-228600" (driver="hyperv")
	I0610 10:46:53.911907    9108 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:46:53.924698    9108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:46:53.924698    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:46:56.195757    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:46:56.195757    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:56.196051    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:46:58.874801    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:46:58.874801    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:46:58.874801    9108 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
	I0610 10:46:58.983046    9108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0583073s)
	I0610 10:46:58.995593    9108 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:46:59.003226    9108 command_runner.go:130] > NAME=Buildroot
	I0610 10:46:59.003226    9108 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 10:46:59.003314    9108 command_runner.go:130] > ID=buildroot
	I0610 10:46:59.003314    9108 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 10:46:59.003314    9108 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 10:46:59.003314    9108 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:46:59.003314    9108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 10:46:59.003314    9108 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 10:46:59.005275    9108 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 10:46:59.005275    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 10:46:59.006299    9108 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\7548\hosts -> hosts in /etc/test/nested/copy/7548
	I0610 10:46:59.006374    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\7548\hosts -> /etc/test/nested/copy/7548/hosts
	I0610 10:46:59.018430    9108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/7548
	I0610 10:46:59.042247    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 10:46:59.098778    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\7548\hosts --> /etc/test/nested/copy/7548/hosts (40 bytes)
	I0610 10:46:59.148872    9108 start.go:296] duration metric: took 5.2369913s for postStartSetup
	I0610 10:46:59.148872    9108 fix.go:56] duration metric: took 55.5356019s for fixHost
	I0610 10:46:59.149194    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:47:01.443645    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:47:01.443645    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:47:01.444539    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:47:04.192499    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:47:04.192499    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:47:04.198924    9108 main.go:141] libmachine: Using SSH client type: native
	I0610 10:47:04.199675    9108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.165 22 <nil> <nil>}
	I0610 10:47:04.199675    9108 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:47:04.338923    9108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718016424.347587893
	
	I0610 10:47:04.338923    9108 fix.go:216] guest clock: 1718016424.347587893
	I0610 10:47:04.338923    9108 fix.go:229] Guest: 2024-06-10 10:47:04.347587893 +0000 UTC Remote: 2024-06-10 10:46:59.1488724 +0000 UTC m=+61.534825001 (delta=5.198715493s)
	I0610 10:47:04.338923    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:47:06.672859    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:47:06.673807    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:47:06.673904    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:47:09.439804    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:47:09.439804    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:47:09.446134    9108 main.go:141] libmachine: Using SSH client type: native
	I0610 10:47:09.446302    9108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.165 22 <nil> <nil>}
	I0610 10:47:09.446302    9108 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718016424
	I0610 10:47:09.610250    9108 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 10:47:04 UTC 2024
	
	I0610 10:47:09.610343    9108 fix.go:236] clock set: Mon Jun 10 10:47:04 UTC 2024
	 (err=<nil>)
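
fix.go compares the guest clock (read over SSH with the shell's date +%s.%N) against the host clock and, when they drift, resets the guest with sudo date -s @<unix-seconds>, as logged above. A Go sketch of that check using the exact values from this run; the 2s threshold is an assumption, not minikube's constant:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values from this run: guest "date +%s.%N" output vs. host time.
    	guest := time.Unix(1718016424, 347587893)
    	host := time.Date(2024, 6, 10, 10, 46, 59, 148872400, time.UTC)
    	delta := guest.Sub(host)
    	fmt.Println("delta:", delta) // ~5.198715493s, matching the log
    	if delta > 2*time.Second || delta < -2*time.Second { // threshold is illustrative
    		fmt.Printf("sudo date -s @%d\n", guest.Unix())
    	}
    }
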
	I0610 10:47:09.610343    9108 start.go:83] releasing machines lock for "functional-228600", held for 1m5.9971336s
	I0610 10:47:09.610511    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:47:11.891443    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:47:11.891443    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:47:11.891562    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:47:14.638106    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:47:14.638106    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:47:14.642814    9108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:47:14.642891    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:47:14.653361    9108 ssh_runner.go:195] Run: cat /version.json
	I0610 10:47:14.653361    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:47:17.028178    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:47:17.028178    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:47:17.028178    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:47:17.028178    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:47:17.028178    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:47:17.028178    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:47:19.899955    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:47:19.900964    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:47:19.901122    9108 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
	I0610 10:47:19.926449    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:47:19.926449    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:47:19.927513    9108 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
	I0610 10:47:20.008511    9108 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 10:47:20.008511    9108 ssh_runner.go:235] Completed: cat /version.json: (5.3551065s)
	I0610 10:47:20.020903    9108 ssh_runner.go:195] Run: systemctl --version
	I0610 10:47:20.071796    9108 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 10:47:20.071796    9108 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4288605s)
	I0610 10:47:20.071796    9108 command_runner.go:130] > systemd 252 (252)
	I0610 10:47:20.071796    9108 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 10:47:20.085844    9108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 10:47:20.098874    9108 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 10:47:20.099126    9108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:47:20.114477    9108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:47:20.133957    9108 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0610 10:47:20.133957    9108 start.go:494] detecting cgroup driver to use...
	I0610 10:47:20.133957    9108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:47:20.173071    9108 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 10:47:20.185151    9108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 10:47:20.218261    9108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 10:47:20.241577    9108 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 10:47:20.253237    9108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 10:47:20.285725    9108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 10:47:20.320482    9108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 10:47:20.354639    9108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 10:47:20.399392    9108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:47:20.433037    9108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 10:47:20.468522    9108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 10:47:20.504152    9108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 10:47:20.541456    9108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:47:20.560484    9108 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 10:47:20.574593    9108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:47:20.612510    9108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:47:20.895005    9108 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 10:47:20.933178    9108 start.go:494] detecting cgroup driver to use...
	I0610 10:47:20.946613    9108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 10:47:20.981474    9108 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 10:47:20.981474    9108 command_runner.go:130] > [Unit]
	I0610 10:47:20.981474    9108 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 10:47:20.981474    9108 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 10:47:20.981474    9108 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 10:47:20.981474    9108 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 10:47:20.981474    9108 command_runner.go:130] > StartLimitBurst=3
	I0610 10:47:20.981474    9108 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 10:47:20.981474    9108 command_runner.go:130] > [Service]
	I0610 10:47:20.982035    9108 command_runner.go:130] > Type=notify
	I0610 10:47:20.982035    9108 command_runner.go:130] > Restart=on-failure
	I0610 10:47:20.982035    9108 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 10:47:20.982035    9108 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 10:47:20.982035    9108 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 10:47:20.982035    9108 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 10:47:20.982035    9108 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 10:47:20.982035    9108 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 10:47:20.982035    9108 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 10:47:20.982035    9108 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 10:47:20.982035    9108 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 10:47:20.982035    9108 command_runner.go:130] > ExecStart=
	I0610 10:47:20.982035    9108 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 10:47:20.982243    9108 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 10:47:20.982243    9108 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 10:47:20.982243    9108 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 10:47:20.982243    9108 command_runner.go:130] > LimitNOFILE=infinity
	I0610 10:47:20.982243    9108 command_runner.go:130] > LimitNPROC=infinity
	I0610 10:47:20.982243    9108 command_runner.go:130] > LimitCORE=infinity
	I0610 10:47:20.982243    9108 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 10:47:20.982243    9108 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 10:47:20.982243    9108 command_runner.go:130] > TasksMax=infinity
	I0610 10:47:20.982243    9108 command_runner.go:130] > TimeoutStartSec=0
	I0610 10:47:20.982243    9108 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 10:47:20.982243    9108 command_runner.go:130] > Delegate=yes
	I0610 10:47:20.982243    9108 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 10:47:20.982243    9108 command_runner.go:130] > KillMode=process
	I0610 10:47:20.982243    9108 command_runner.go:130] > [Install]
	I0610 10:47:20.982243    9108 command_runner.go:130] > WantedBy=multi-user.target
	I0610 10:47:20.996308    9108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:47:21.055336    9108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:47:21.102230    9108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:47:21.142056    9108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 10:47:21.166011    9108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:47:21.201120    9108 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 10:47:21.213350    9108 ssh_runner.go:195] Run: which cri-dockerd
	I0610 10:47:21.220376    9108 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 10:47:21.233474    9108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 10:47:21.260043    9108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 10:47:21.307321    9108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 10:47:21.617267    9108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 10:47:21.884531    9108 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 10:47:21.884870    9108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 10:47:21.932818    9108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:47:22.220495    9108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 10:47:35.147060    9108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.926459s)
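The 130-byte daemon.json written before this restart is not echoed into the log; assuming it carries the cgroupfs setting the surrounding steps imply, the standard Docker knob and a follow-up check would look roughly like:

    printf '%s\n' '{' '  "exec-opts": ["native.cgroupdriver=cgroupfs"]' '}' \
      | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'    # expect: cgroupfs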
	I0610 10:47:35.160666    9108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 10:47:35.202139    9108 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0610 10:47:35.252675    9108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 10:47:35.296018    9108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 10:47:35.534197    9108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 10:47:35.762888    9108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:47:35.992354    9108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 10:47:36.042316    9108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 10:47:36.081735    9108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:47:36.306088    9108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 10:47:36.450159    9108 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 10:47:36.462900    9108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 10:47:36.476389    9108 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 10:47:36.476389    9108 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 10:47:36.476464    9108 command_runner.go:130] > Device: 0,22	Inode: 1430        Links: 1
	I0610 10:47:36.476464    9108 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 10:47:36.476464    9108 command_runner.go:130] > Access: 2024-06-10 10:47:36.332999420 +0000
	I0610 10:47:36.476464    9108 command_runner.go:130] > Modify: 2024-06-10 10:47:36.332999420 +0000
	I0610 10:47:36.476464    9108 command_runner.go:130] > Change: 2024-06-10 10:47:36.336999330 +0000
	I0610 10:47:36.476464    9108 command_runner.go:130] >  Birth: -
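The "Will wait 60s for socket path" step reduces to polling until the Unix socket exists; an equivalent bash wait (poll interval chosen arbitrarily):

    timeout 60 bash -c 'until [ -S /var/run/cri-dockerd.sock ]; do sleep 0.5; done' \
      && stat /var/run/cri-dockerd.sock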
	I0610 10:47:36.476552    9108 start.go:562] Will wait 60s for crictl version
	I0610 10:47:36.488530    9108 ssh_runner.go:195] Run: which crictl
	I0610 10:47:36.497298    9108 command_runner.go:130] > /usr/bin/crictl
	I0610 10:47:36.511130    9108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:47:36.565280    9108 command_runner.go:130] > Version:  0.1.0
	I0610 10:47:36.565317    9108 command_runner.go:130] > RuntimeName:  docker
	I0610 10:47:36.565357    9108 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 10:47:36.565357    9108 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 10:47:36.565357    9108 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 10:47:36.575983    9108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 10:47:36.611117    9108 command_runner.go:130] > 26.1.4
	I0610 10:47:36.621715    9108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 10:47:36.655537    9108 command_runner.go:130] > 26.1.4
	I0610 10:47:36.659396    9108 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 10:47:36.659589    9108 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 10:47:36.663995    9108 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 10:47:36.663995    9108 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 10:47:36.663995    9108 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 10:47:36.663995    9108 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 10:47:36.667806    9108 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 10:47:36.667806    9108 ip.go:210] interface addr: 172.17.144.1/20
	I0610 10:47:36.682590    9108 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 10:47:36.688496    9108 command_runner.go:130] > 172.17.144.1	host.minikube.internal
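The grep against /etc/hosts is the usual idempotent-append check: the host.minikube.internal entry is only added when it is missing. Sketched in shell:

    grep -q 'host.minikube.internal' /etc/hosts \
      || printf '%s\t%s\n' 172.17.144.1 host.minikube.internal | sudo tee -a /etc/hosts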
	I0610 10:47:36.689257    9108 kubeadm.go:877] updating cluster {Name:functional-228600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-228600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.144.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:47:36.689257    9108 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 10:47:36.699906    9108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 10:47:36.728476    9108 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 10:47:36.728571    9108 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 10:47:36.728571    9108 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 10:47:36.728571    9108 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 10:47:36.728571    9108 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 10:47:36.728571    9108 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 10:47:36.728571    9108 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 10:47:36.728681    9108 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 10:47:36.728681    9108 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 10:47:36.728681    9108 docker.go:615] Images already preloaded, skipping extraction
	I0610 10:47:36.738919    9108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 10:47:36.767658    9108 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 10:47:36.767658    9108 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 10:47:36.767658    9108 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 10:47:36.767658    9108 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 10:47:36.767658    9108 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 10:47:36.767658    9108 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 10:47:36.767658    9108 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 10:47:36.767658    9108 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 10:47:36.767658    9108 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 10:47:36.767658    9108 cache_images.go:84] Images are preloaded, skipping loading
	I0610 10:47:36.767658    9108 kubeadm.go:928] updating node { 172.17.144.165 8441 v1.30.1 docker true true} ...
	I0610 10:47:36.767658    9108 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-228600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.144.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:functional-228600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:47:36.777083    9108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 10:47:36.816597    9108 command_runner.go:130] > cgroupfs
	I0610 10:47:36.817183    9108 cni.go:84] Creating CNI manager for ""
	I0610 10:47:36.817294    9108 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:47:36.817356    9108 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:47:36.817474    9108 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.144.165 APIServerPort:8441 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-228600 NodeName:functional-228600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.144.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.144.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:47:36.817740    9108 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.144.165
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-228600"
	  kubeletExtraArgs:
	    node-ip: 172.17.144.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.144.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 10:47:36.831462    9108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:47:36.850861    9108 command_runner.go:130] > kubeadm
	I0610 10:47:36.850861    9108 command_runner.go:130] > kubectl
	I0610 10:47:36.850861    9108 command_runner.go:130] > kubelet
	I0610 10:47:36.850861    9108 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:47:36.863990    9108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 10:47:36.882906    9108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0610 10:47:36.915620    9108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:47:36.948987    9108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
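The rendered config staged at /var/tmp/minikube/kubeadm.yaml.new can be sanity-checked before it replaces the live file; recent kubeadm releases (v1.26 and later, so presumably the v1.30.1 binary used here) ship a validate subcommand for this:

    sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new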
	I0610 10:47:36.995796    9108 ssh_runner.go:195] Run: grep 172.17.144.165	control-plane.minikube.internal$ /etc/hosts
	I0610 10:47:37.002125    9108 command_runner.go:130] > 172.17.144.165	control-plane.minikube.internal
	I0610 10:47:37.015421    9108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:47:37.260792    9108 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:47:37.292595    9108 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600 for IP: 172.17.144.165
	I0610 10:47:37.292595    9108 certs.go:194] generating shared ca certs ...
	I0610 10:47:37.292595    9108 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:47:37.293424    9108 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 10:47:37.293840    9108 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 10:47:37.293987    9108 certs.go:256] generating profile certs ...
	I0610 10:47:37.294244    9108 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.key
	I0610 10:47:37.294244    9108 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\apiserver.key.c3c922d2
	I0610 10:47:37.295189    9108 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\proxy-client.key
	I0610 10:47:37.295189    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:47:37.295189    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:47:37.295189    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:47:37.295189    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:47:37.295189    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:47:37.295189    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:47:37.295189    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:47:37.296188    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:47:37.296188    9108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 10:47:37.296188    9108 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 10:47:37.296188    9108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 10:47:37.297174    9108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 10:47:37.297174    9108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 10:47:37.297174    9108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 10:47:37.297174    9108 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 10:47:37.298193    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 10:47:37.298193    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:47:37.298193    9108 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 10:47:37.299295    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:47:37.352942    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:47:37.413318    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:47:37.463598    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 10:47:37.515773    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 10:47:37.564627    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 10:47:37.616749    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:47:37.670729    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 10:47:37.718048    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 10:47:37.772659    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:47:37.827894    9108 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 10:47:37.877606    9108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:47:37.924333    9108 ssh_runner.go:195] Run: openssl version
	I0610 10:47:37.934400    9108 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 10:47:37.946115    9108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 10:47:37.978107    9108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 10:47:37.985247    9108 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 10:47:37.985498    9108 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 10:47:37.997861    9108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 10:47:38.006444    9108 command_runner.go:130] > 51391683
	I0610 10:47:38.020548    9108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 10:47:38.057917    9108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 10:47:38.088594    9108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 10:47:38.096148    9108 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 10:47:38.096148    9108 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 10:47:38.108788    9108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 10:47:38.118572    9108 command_runner.go:130] > 3ec20f2e
	I0610 10:47:38.131637    9108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:47:38.165809    9108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:47:38.200731    9108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:47:38.208126    9108 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:47:38.208374    9108 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:47:38.223259    9108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:47:38.233126    9108 command_runner.go:130] > b5213941
	I0610 10:47:38.245745    9108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
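The hash-then-link pairs above follow OpenSSL's c_rehash convention: a certificate is found by the TLS stack once a symlink named <subject-hash>.0 in /etc/ssl/certs points at it. Condensed for one certificate:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"    # here: b5213941.0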
	I0610 10:47:38.279784    9108 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:47:38.288794    9108 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:47:38.288794    9108 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0610 10:47:38.288794    9108 command_runner.go:130] > Device: 8,1	Inode: 9431378     Links: 1
	I0610 10:47:38.288794    9108 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 10:47:38.288794    9108 command_runner.go:130] > Access: 2024-06-10 10:44:47.462120463 +0000
	I0610 10:47:38.288890    9108 command_runner.go:130] > Modify: 2024-06-10 10:44:47.462120463 +0000
	I0610 10:47:38.288890    9108 command_runner.go:130] > Change: 2024-06-10 10:44:47.462120463 +0000
	I0610 10:47:38.288890    9108 command_runner.go:130] >  Birth: 2024-06-10 10:44:47.462120463 +0000
	I0610 10:47:38.299884    9108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 10:47:38.311137    9108 command_runner.go:130] > Certificate will not expire
	I0610 10:47:38.324367    9108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 10:47:38.332981    9108 command_runner.go:130] > Certificate will not expire
	I0610 10:47:38.347184    9108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 10:47:38.356176    9108 command_runner.go:130] > Certificate will not expire
	I0610 10:47:38.368335    9108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 10:47:38.377119    9108 command_runner.go:130] > Certificate will not expire
	I0610 10:47:38.389905    9108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 10:47:38.398643    9108 command_runner.go:130] > Certificate will not expire
	I0610 10:47:38.411791    9108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 10:47:38.426319    9108 command_runner.go:130] > Certificate will not expire
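Each expiry probe above relies on openssl's -checkend, which exits non-zero when the certificate lapses within the given number of seconds (86400, i.e. 24 hours). The same probes, looped:

    for c in apiserver-etcd-client.crt apiserver-kubelet-client.crt front-proxy-client.crt; do
      openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c" \
        || echo "renew soon: $c" >&2
    done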
	I0610 10:47:38.426319    9108 kubeadm.go:391] StartCluster: {Name:functional-228600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-228600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.144.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:47:38.437123    9108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 10:47:38.477993    9108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 10:47:38.498614    9108 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0610 10:47:38.498683    9108 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0610 10:47:38.498683    9108 command_runner.go:130] > /var/lib/minikube/etcd:
	I0610 10:47:38.498683    9108 command_runner.go:130] > member
	W0610 10:47:38.498767    9108 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 10:47:38.498767    9108 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 10:47:38.498841    9108 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 10:47:38.511501    9108 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 10:47:38.531585    9108 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:47:38.532915    9108 kubeconfig.go:125] found "functional-228600" server: "https://172.17.144.165:8441"
	I0610 10:47:38.534572    9108 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 10:47:38.535595    9108 kapi.go:59] client config for functional-228600: &rest.Config{Host:"https://172.17.144.165:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228600\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 10:47:38.536821    9108 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 10:47:38.548436    9108 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 10:47:38.566876    9108 kubeadm.go:624] The running cluster does not require reconfiguration: 172.17.144.165
	I0610 10:47:38.566876    9108 kubeadm.go:1154] stopping kube-system containers ...
	I0610 10:47:38.578083    9108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 10:47:38.613055    9108 command_runner.go:130] > 14092496279b
	I0610 10:47:38.613055    9108 command_runner.go:130] > ff07de331b8c
	I0610 10:47:38.613055    9108 command_runner.go:130] > fa6443d913fb
	I0610 10:47:38.613201    9108 command_runner.go:130] > 7e089bf2068a
	I0610 10:47:38.613201    9108 command_runner.go:130] > 0c39dc24bea3
	I0610 10:47:38.613201    9108 command_runner.go:130] > 0011427cb4c7
	I0610 10:47:38.613201    9108 command_runner.go:130] > 7aede8efba30
	I0610 10:47:38.613201    9108 command_runner.go:130] > eb3b161a2f03
	I0610 10:47:38.613241    9108 command_runner.go:130] > ae465579e85a
	I0610 10:47:38.613241    9108 command_runner.go:130] > 50b2017b5891
	I0610 10:47:38.613241    9108 command_runner.go:130] > 6d5a4de94bc9
	I0610 10:47:38.613241    9108 command_runner.go:130] > 4da082049d98
	I0610 10:47:38.613241    9108 command_runner.go:130] > 09f6b305cb4e
	I0610 10:47:38.613241    9108 command_runner.go:130] > 96af56a455cd
	I0610 10:47:38.615101    9108 docker.go:483] Stopping containers: [14092496279b ff07de331b8c fa6443d913fb 7e089bf2068a 0c39dc24bea3 0011427cb4c7 7aede8efba30 eb3b161a2f03 ae465579e85a 50b2017b5891 6d5a4de94bc9 4da082049d98 09f6b305cb4e 96af56a455cd]
	I0610 10:47:38.629051    9108 ssh_runner.go:195] Run: docker stop 14092496279b ff07de331b8c fa6443d913fb 7e089bf2068a 0c39dc24bea3 0011427cb4c7 7aede8efba30 eb3b161a2f03 ae465579e85a 50b2017b5891 6d5a4de94bc9 4da082049d98 09f6b305cb4e 96af56a455cd
	I0610 10:47:38.657593    9108 command_runner.go:130] > 14092496279b
	I0610 10:47:38.657593    9108 command_runner.go:130] > ff07de331b8c
	I0610 10:47:38.657593    9108 command_runner.go:130] > fa6443d913fb
	I0610 10:47:38.657593    9108 command_runner.go:130] > 7e089bf2068a
	I0610 10:47:38.657593    9108 command_runner.go:130] > 0c39dc24bea3
	I0610 10:47:38.657593    9108 command_runner.go:130] > 0011427cb4c7
	I0610 10:47:38.657593    9108 command_runner.go:130] > 7aede8efba30
	I0610 10:47:38.658566    9108 command_runner.go:130] > eb3b161a2f03
	I0610 10:47:38.658566    9108 command_runner.go:130] > ae465579e85a
	I0610 10:47:38.658566    9108 command_runner.go:130] > 50b2017b5891
	I0610 10:47:38.658566    9108 command_runner.go:130] > 6d5a4de94bc9
	I0610 10:47:38.658566    9108 command_runner.go:130] > 4da082049d98
	I0610 10:47:38.658566    9108 command_runner.go:130] > 09f6b305cb4e
	I0610 10:47:38.658566    9108 command_runner.go:130] > 96af56a455cd
	I0610 10:47:38.672443    9108 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 10:47:38.741800    9108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 10:47:38.762331    9108 command_runner.go:130] > -rw------- 1 root root 5647 Jun 10 10:44 /etc/kubernetes/admin.conf
	I0610 10:47:38.762331    9108 command_runner.go:130] > -rw------- 1 root root 5654 Jun 10 10:44 /etc/kubernetes/controller-manager.conf
	I0610 10:47:38.762331    9108 command_runner.go:130] > -rw------- 1 root root 2007 Jun 10 10:44 /etc/kubernetes/kubelet.conf
	I0610 10:47:38.762331    9108 command_runner.go:130] > -rw------- 1 root root 5602 Jun 10 10:44 /etc/kubernetes/scheduler.conf
	I0610 10:47:38.762331    9108 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Jun 10 10:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jun 10 10:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jun 10 10:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jun 10 10:44 /etc/kubernetes/scheduler.conf
	
	I0610 10:47:38.776881    9108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0610 10:47:38.798702    9108 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0610 10:47:38.813398    9108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0610 10:47:38.830956    9108 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0610 10:47:38.843085    9108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0610 10:47:38.862893    9108 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:47:38.877086    9108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 10:47:38.905656    9108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0610 10:47:38.923608    9108 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:47:38.935867    9108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
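The handling of controller-manager.conf and scheduler.conf above is a check-and-remove pattern: grep each kubeconfig for the expected control-plane URL, and delete the file when the URL is absent so the kubeconfig phase below regenerates it. Per file:

    sudo grep -q 'https://control-plane.minikube.internal:8441' /etc/kubernetes/scheduler.conf \
      || sudo rm -f /etc/kubernetes/scheduler.conf    # stale; rewritten by kubeadm init phase kubeconfig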
	I0610 10:47:38.970963    9108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 10:47:38.990680    9108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 10:47:39.070897    9108 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 10:47:39.071893    9108 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 10:47:39.071931    9108 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 10:47:39.071931    9108 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 10:47:39.071931    9108 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0610 10:47:39.071968    9108 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0610 10:47:39.072001    9108 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0610 10:47:39.072001    9108 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0610 10:47:39.072001    9108 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0610 10:47:39.072001    9108 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 10:47:39.072001    9108 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 10:47:39.072001    9108 command_runner.go:130] > [certs] Using the existing "sa" key
	I0610 10:47:39.072001    9108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 10:47:40.456183    9108 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 10:47:40.456213    9108 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0610 10:47:40.456297    9108 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0610 10:47:40.456297    9108 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0610 10:47:40.456297    9108 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 10:47:40.456352    9108 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 10:47:40.456352    9108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3843392s)
	I0610 10:47:40.456430    9108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 10:47:40.790280    9108 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 10:47:40.790330    9108 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 10:47:40.790330    9108 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 10:47:40.790330    9108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 10:47:40.884828    9108 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 10:47:40.884828    9108 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 10:47:40.884828    9108 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 10:47:40.884828    9108 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 10:47:40.885836    9108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 10:47:41.006571    9108 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 10:47:41.008729    9108 api_server.go:52] waiting for apiserver process to appear ...
	I0610 10:47:41.020476    9108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:47:41.530499    9108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:47:42.025461    9108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:47:42.534883    9108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:47:43.030530    9108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:47:43.057670    9108 command_runner.go:130] > 4907
	I0610 10:47:43.057670    9108 api_server.go:72] duration metric: took 2.049016s to wait for apiserver process to appear ...
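The half-second pgrep cadence above is the process-appearance wait; with an explicit deadline it could be written as:

    timeout 60 bash -c \
      'until sudo pgrep -xnf "kube-apiserver.*minikube.*" >/dev/null; do sleep 0.5; done'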
	I0610 10:47:43.057670    9108 api_server.go:88] waiting for apiserver healthz status ...
	I0610 10:47:43.057670    9108 api_server.go:253] Checking apiserver healthz at https://172.17.144.165:8441/healthz ...
	I0610 10:47:46.364497    9108 api_server.go:279] https://172.17.144.165:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 10:47:46.364744    9108 api_server.go:103] status: https://172.17.144.165:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 10:47:46.364851    9108 api_server.go:253] Checking apiserver healthz at https://172.17.144.165:8441/healthz ...
	I0610 10:47:46.422253    9108 api_server.go:279] https://172.17.144.165:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 10:47:46.422253    9108 api_server.go:103] status: https://172.17.144.165:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 10:47:46.572183    9108 api_server.go:253] Checking apiserver healthz at https://172.17.144.165:8441/healthz ...
	I0610 10:47:46.583217    9108 api_server.go:279] https://172.17.144.165:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 10:47:46.583279    9108 api_server.go:103] status: https://172.17.144.165:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 10:47:47.073489    9108 api_server.go:253] Checking apiserver healthz at https://172.17.144.165:8441/healthz ...
	I0610 10:47:47.081179    9108 api_server.go:279] https://172.17.144.165:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 10:47:47.081549    9108 api_server.go:103] status: https://172.17.144.165:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 10:47:47.564933    9108 api_server.go:253] Checking apiserver healthz at https://172.17.144.165:8441/healthz ...
	I0610 10:47:47.583784    9108 api_server.go:279] https://172.17.144.165:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 10:47:47.584209    9108 api_server.go:103] status: https://172.17.144.165:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 10:47:48.059212    9108 api_server.go:253] Checking apiserver healthz at https://172.17.144.165:8441/healthz ...
	I0610 10:47:48.066424    9108 api_server.go:279] https://172.17.144.165:8441/healthz returned 200:
	ok
	I0610 10:47:48.067582    9108 round_trippers.go:463] GET https://172.17.144.165:8441/version
	I0610 10:47:48.067582    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:48.067582    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:48.067582    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:48.079268    9108 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 10:47:48.079268    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:48.079268    9108 round_trippers.go:580]     Audit-Id: a3441e22-33e5-4cf2-8813-3c61b510dfc9
	I0610 10:47:48.079268    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:48.079268    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:48.079268    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:48.079268    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:48.079268    9108 round_trippers.go:580]     Content-Length: 263
	I0610 10:47:48.079268    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:48 GMT
	I0610 10:47:48.079268    9108 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 10:47:48.079268    9108 api_server.go:141] control plane version: v1.30.1
	I0610 10:47:48.079268    9108 api_server.go:131] duration metric: took 5.021556s to wait for apiserver health ...
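
[editor's note] For reference, the /version payload above deserializes into a small struct. A sketch using only the fields shown in the response; the struct name is mine (the canonical type is k8s.io/apimachinery/pkg/version.Info):

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the fields visible in the /version response above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	GitCommit  string `json:"gitCommit"`
	BuildDate  string `json:"buildDate"`
	GoVersion  string `json:"goVersion"`
	Platform   string `json:"platform"`
}

func main() {
	// Body copied from the log entry above.
	raw := `{"major":"1","minor":"30","gitVersion":"v1.30.1","gitCommit":"6911225c3f747e1cd9d109c305436d08b668f086","buildDate":"2024-05-14T10:42:02Z","goVersion":"go1.22.2","compiler":"gc","platform":"linux/amd64"}`
	var v versionInfo
	if err := json.Unmarshal([]byte(raw), &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // matches the log's "v1.30.1"
}
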
	I0610 10:47:48.079268    9108 cni.go:84] Creating CNI manager for ""
	I0610 10:47:48.079268    9108 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:47:48.082272    9108 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 10:47:48.096269    9108 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 10:47:48.128146    9108 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
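
[editor's note] The 496-byte conflist itself is not reproduced in the log. The sketch below writes a typical bridge+portmap conflist of the kind minikube generates for the bridge CNI; the JSON values are assumptions for illustration, not the actual file from this run:

package main

import "os"

// conflist is an assumed example of a bridge CNI config; the real file's
// contents are not shown in the log above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Mirrors the mkdir + scp steps logged above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
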
	I0610 10:47:48.204826    9108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 10:47:48.204826    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods
	I0610 10:47:48.204826    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:48.204826    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:48.204826    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:48.214276    9108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 10:47:48.214276    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:48.215258    9108 round_trippers.go:580]     Audit-Id: 80395d2c-7c05-422e-8a11-e39c32d4807d
	I0610 10:47:48.215258    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:48.215280    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:48.215280    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:48.215280    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:48.215280    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:48 GMT
	I0610 10:47:48.216470    9108 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"510"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gzsvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0efe6033-8a4b-4c49-91e0-2f4ba61b5441","resourceVersion":"505","creationTimestamp":"2024-06-10T10:45:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0be51d35-67ef-4d1a-93c5-af618f589939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0be51d35-67ef-4d1a-93c5-af618f589939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51568 chars]
	I0610 10:47:48.220272    9108 system_pods.go:59] 7 kube-system pods found
	I0610 10:47:48.220272    9108 system_pods.go:61] "coredns-7db6d8ff4d-gzsvv" [0efe6033-8a4b-4c49-91e0-2f4ba61b5441] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 10:47:48.220272    9108 system_pods.go:61] "etcd-functional-228600" [df19256d-9282-42ff-b5ab-75e01e69d744] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 10:47:48.220272    9108 system_pods.go:61] "kube-apiserver-functional-228600" [2e328504-3c20-4c0f-b4ea-d757129cab3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 10:47:48.220272    9108 system_pods.go:61] "kube-controller-manager-functional-228600" [19f10dd4-2205-49b6-a025-f6f3513e7d5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 10:47:48.220272    9108 system_pods.go:61] "kube-proxy-lpfg4" [b3716009-4a8f-457f-9f45-2960743d8939] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0610 10:47:48.220272    9108 system_pods.go:61] "kube-scheduler-functional-228600" [2d8199a5-94d6-4fb7-a16e-3b51e9c63ae9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 10:47:48.220272    9108 system_pods.go:61] "storage-provisioner" [7ddb20ed-d760-437c-90c6-9dfe48efdb1f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0610 10:47:48.220272    9108 system_pods.go:74] duration metric: took 15.4454ms to wait for pod list to return data ...
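
[editor's note] The same kube-system inventory can be reproduced with client-go. A sketch assuming a kubeconfig at the default `~/.kube/config` path (that path, and the trimmed error handling, are my assumptions):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items)) // the run above saw 7
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
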
	I0610 10:47:48.220272    9108 node_conditions.go:102] verifying NodePressure condition ...
	I0610 10:47:48.220272    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes
	I0610 10:47:48.221273    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:48.221273    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:48.221273    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:48.226272    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:48.226794    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:48.226794    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:48.226794    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:48.226879    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:48.226879    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:48 GMT
	I0610 10:47:48.226879    9108 round_trippers.go:580]     Audit-Id: 70e0dadf-6e54-4250-aed3-2b9811f11efe
	I0610 10:47:48.226879    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:48.227144    9108 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"510"},"items":[{"metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0610 10:47:48.227309    9108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:47:48.227309    9108 node_conditions.go:123] node cpu capacity is 2
	I0610 10:47:48.227309    9108 node_conditions.go:105] duration metric: took 7.037ms to run NodePressure ...
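
[editor's note] The NodePressure step above reads each node's capacity and conditions. A sketch of the condition scan using the public k8s.io/api types; the helper name is mine, not minikube's node_conditions.go:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hasNodePressure reports whether any pressure condition on the node is True.
func hasNodePressure(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status == corev1.ConditionTrue {
				return true
			}
		}
	}
	return false
}

func main() {
	n := &corev1.Node{}
	n.Status.Conditions = []corev1.NodeCondition{
		{Type: corev1.NodeMemoryPressure, Status: corev1.ConditionFalse},
	}
	fmt.Println("pressure:", hasNodePressure(n)) // false, as on the healthy node above
}
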
	I0610 10:47:48.227309    9108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 10:47:49.045476    9108 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 10:47:49.045596    9108 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
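
[editor's note] Re-applying the essential addons is a plain command execution on the guest. Outside minikube's internal ssh_runner you could mirror the exact invocation logged above like this (sketch; assumes it runs on the node itself rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command line the log shows ssh_runner executing on the VM.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml`)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out) // expect the "[addons] Applied essential addon: ..." lines
	if err != nil {
		panic(err)
	}
}
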
	I0610 10:47:49.045858    9108 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0610 10:47:49.046037    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0610 10:47:49.046107    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:49.046107    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:49.046107    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:49.051265    9108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:47:49.051265    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:49.051615    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:49 GMT
	I0610 10:47:49.051615    9108 round_trippers.go:580]     Audit-Id: 78bee415-af37-4455-9f40-872f94931a21
	I0610 10:47:49.051615    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:49.051615    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:49.051615    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:49.051615    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:49.052645    9108 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"515"},"items":[{"metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30988 chars]
	I0610 10:47:49.053277    9108 kubeadm.go:733] kubelet initialised
	I0610 10:47:49.053277    9108 kubeadm.go:734] duration metric: took 7.419ms waiting for restarted kubelet to initialise ...
	I0610 10:47:49.053277    9108 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
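
[editor's note] pod_ready.go's "Ready" test boils down to a single condition on the pod status, which the GETs below keep re-reading. A sketch of that predicate using the public API types (the function name is mine):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady returns true when the pod's "Ready" condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{}
	p.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}
	fmt.Println("ready:", isPodReady(p)) // true, like coredns at 10:47:49 below
}
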
	I0610 10:47:49.053277    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods
	I0610 10:47:49.053277    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:49.053277    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:49.053277    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:49.062452    9108 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 10:47:49.062606    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:49.062659    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:49.062696    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:49.062696    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:49.062696    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:49 GMT
	I0610 10:47:49.062696    9108 round_trippers.go:580]     Audit-Id: e7bb2124-25a0-47e9-b455-3738e4b9c41f
	I0610 10:47:49.062793    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:49.064102    9108 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"515"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gzsvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0efe6033-8a4b-4c49-91e0-2f4ba61b5441","resourceVersion":"505","creationTimestamp":"2024-06-10T10:45:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0be51d35-67ef-4d1a-93c5-af618f589939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0be51d35-67ef-4d1a-93c5-af618f589939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51168 chars]
	I0610 10:47:49.066579    9108 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gzsvv" in "kube-system" namespace to be "Ready" ...
	I0610 10:47:49.066579    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gzsvv
	I0610 10:47:49.066579    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:49.066579    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:49.066579    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:49.068208    9108 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 10:47:49.069147    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:49.069210    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:49 GMT
	I0610 10:47:49.069210    9108 round_trippers.go:580]     Audit-Id: 37474385-f210-4e73-bbed-74c4ad19e2c3
	I0610 10:47:49.069210    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:49.069210    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:49.069210    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:49.069210    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:49.069403    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gzsvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0efe6033-8a4b-4c49-91e0-2f4ba61b5441","resourceVersion":"505","creationTimestamp":"2024-06-10T10:45:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0be51d35-67ef-4d1a-93c5-af618f589939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0be51d35-67ef-4d1a-93c5-af618f589939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0610 10:47:49.070071    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:49.070125    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:49.070125    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:49.070125    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:49.076220    9108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:47:49.076220    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:49.076220    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:49.076220    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:49.076724    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:49 GMT
	I0610 10:47:49.076724    9108 round_trippers.go:580]     Audit-Id: 7f683cf6-cfc1-4d5d-bd55-cd14db3df526
	I0610 10:47:49.076724    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:49.076724    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:49.076819    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:49.576590    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gzsvv
	I0610 10:47:49.576590    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:49.576590    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:49.576590    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:49.580148    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:49.580224    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:49.580224    9108 round_trippers.go:580]     Audit-Id: a8854dfd-9a25-49c4-8906-7986078832c4
	I0610 10:47:49.580224    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:49.580224    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:49.580224    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:49.580224    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:49.580224    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:49 GMT
	I0610 10:47:49.580492    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gzsvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0efe6033-8a4b-4c49-91e0-2f4ba61b5441","resourceVersion":"517","creationTimestamp":"2024-06-10T10:45:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0be51d35-67ef-4d1a-93c5-af618f589939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0be51d35-67ef-4d1a-93c5-af618f589939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0610 10:47:49.581091    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:49.581224    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:49.581224    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:49.581224    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:49.585142    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:49.585142    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:49.585142    9108 round_trippers.go:580]     Audit-Id: 40e1f73e-545d-45a9-9a27-56a2d11e6675
	I0610 10:47:49.585142    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:49.585142    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:49.585142    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:49.585256    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:49.585256    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:49 GMT
	I0610 10:47:49.585474    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:49.585942    9108 pod_ready.go:92] pod "coredns-7db6d8ff4d-gzsvv" in "kube-system" namespace has status "Ready":"True"
	I0610 10:47:49.585942    9108 pod_ready.go:81] duration metric: took 519.3588ms for pod "coredns-7db6d8ff4d-gzsvv" in "kube-system" namespace to be "Ready" ...
	I0610 10:47:49.586002    9108 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:47:49.586064    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:49.586136    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:49.586171    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:49.586171    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:49.590144    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:49.590144    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:49.590144    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:49.590144    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:49.590144    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:49 GMT
	I0610 10:47:49.590144    9108 round_trippers.go:580]     Audit-Id: 78e5be02-6a5c-4a1f-a208-49ca83d3ff2f
	I0610 10:47:49.590144    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:49.590144    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:49.590144    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:49.591128    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:49.591128    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:49.591128    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:49.591128    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:49.594150    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:49.594150    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:49.594150    9108 round_trippers.go:580]     Audit-Id: 2c4b366d-1ad1-4e07-af78-a7f75b13afe1
	I0610 10:47:49.594150    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:49.594150    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:49.594150    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:49.594150    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:49.594150    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:49 GMT
	I0610 10:47:49.594150    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:50.099232    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:50.099232    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:50.099232    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:50.099232    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:50.102826    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:50.102826    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:50.103687    9108 round_trippers.go:580]     Audit-Id: d19be1fb-e2f9-48b1-97a9-4776ed0d3875
	I0610 10:47:50.103687    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:50.103687    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:50.103687    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:50.103687    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:50.103687    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:50 GMT
	I0610 10:47:50.103839    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:50.104664    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:50.104723    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:50.104723    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:50.104723    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:50.107952    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:50.108437    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:50.108437    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:50.108437    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:50 GMT
	I0610 10:47:50.108437    9108 round_trippers.go:580]     Audit-Id: 951f1d28-82b6-4110-9402-bb271b576913
	I0610 10:47:50.108437    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:50.108437    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:50.108437    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:50.108877    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:50.588964    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:50.588964    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:50.589042    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:50.589042    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:50.593291    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:50.593417    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:50.593417    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:50.593417    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:50.593417    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:50.593417    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:50 GMT
	I0610 10:47:50.593417    9108 round_trippers.go:580]     Audit-Id: 1b696227-d748-46cc-bb87-2e9d7b4d8376
	I0610 10:47:50.593479    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:50.593698    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:50.594468    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:50.594468    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:50.594468    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:50.594545    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:50.610392    9108 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0610 10:47:50.611423    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:50.611423    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:50.611423    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:50.611423    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:50 GMT
	I0610 10:47:50.611423    9108 round_trippers.go:580]     Audit-Id: b0e0529d-ccfb-40d8-8239-6bbd6a30b2d2
	I0610 10:47:50.611423    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:50.611423    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:50.611878    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:51.089839    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:51.089839    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:51.089839    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:51.089839    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:51.093531    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:51.094036    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:51.094036    9108 round_trippers.go:580]     Audit-Id: 18801fa1-6bcd-48fc-9aa4-8617bfecf416
	I0610 10:47:51.094036    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:51.094036    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:51.094036    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:51.094036    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:51.094036    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:51 GMT
	I0610 10:47:51.094343    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:51.095038    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:51.095038    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:51.095138    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:51.095138    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:51.098432    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:51.098565    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:51.098565    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:51.098565    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:51.098565    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:51 GMT
	I0610 10:47:51.098565    9108 round_trippers.go:580]     Audit-Id: 382189fe-caf7-4e7d-8a6a-e6fada6d6c90
	I0610 10:47:51.098636    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:51.098636    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:51.099143    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:51.590415    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:51.590415    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:51.590415    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:51.590415    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:51.594167    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:51.594167    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:51.594167    9108 round_trippers.go:580]     Audit-Id: 2d8eef4e-3b59-41c6-a9d0-3b002f187877
	I0610 10:47:51.594167    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:51.594167    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:51.594167    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:51.594167    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:51.594884    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:51 GMT
	I0610 10:47:51.595208    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:51.595843    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:51.595843    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:51.595843    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:51.595843    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:51.600117    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:51.600117    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:51.600117    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:51.600117    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:51.600117    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:51 GMT
	I0610 10:47:51.600117    9108 round_trippers.go:580]     Audit-Id: 59a8d88f-d431-41c1-8bbb-edc1f7ded133
	I0610 10:47:51.600117    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:51.600117    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:51.600920    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:51.601429    9108 pod_ready.go:102] pod "etcd-functional-228600" in "kube-system" namespace has status "Ready":"False"
	I0610 10:47:52.089209    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:52.089209    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:52.089209    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:52.089209    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:52.093781    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:52.094744    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:52.094792    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:52.094792    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:52.094792    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:52.094792    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:52.094792    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:52 GMT
	I0610 10:47:52.094792    9108 round_trippers.go:580]     Audit-Id: 7f5b25dc-3201-461a-9d0f-6c7f31c4ea73
	I0610 10:47:52.095656    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:52.096191    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:52.096340    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:52.096340    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:52.096340    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:52.099641    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:52.099641    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:52.099691    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:52 GMT
	I0610 10:47:52.099691    9108 round_trippers.go:580]     Audit-Id: 12ab2d77-31a8-4359-9480-a212522022fe
	I0610 10:47:52.099691    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:52.099691    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:52.099691    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:52.099691    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:52.099997    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:52.591078    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:52.591162    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:52.591162    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:52.591162    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:52.595646    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:52.595646    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:52.596543    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:52.596543    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:52.596543    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:52.596601    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:52 GMT
	I0610 10:47:52.596601    9108 round_trippers.go:580]     Audit-Id: a4f20842-33aa-4cbd-9c16-32d98904ccf4
	I0610 10:47:52.596601    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:52.596601    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:52.597346    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:52.597346    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:52.597346    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:52.597346    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:52.600934    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:52.601251    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:52.601251    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:52.601251    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:52.601251    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:52.601251    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:52.601251    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:52 GMT
	I0610 10:47:52.601251    9108 round_trippers.go:580]     Audit-Id: e9832ac1-7a56-4ff1-8fc0-a23ace5f4c6f
	I0610 10:47:52.601675    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:53.092326    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:53.092326    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:53.092326    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:53.092326    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:53.099471    9108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:47:53.099471    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:53.099471    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:53.099471    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:53.099471    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:53.099471    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:53.099471    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:53 GMT
	I0610 10:47:53.099471    9108 round_trippers.go:580]     Audit-Id: 6d8680e6-22eb-4f82-8e6b-e5e3a9f0a007
	I0610 10:47:53.100136    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:53.100549    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:53.100549    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:53.100549    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:53.100549    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:53.103288    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:47:53.104187    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:53.104187    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:53.104187    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:53.104187    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:53.104187    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:53 GMT
	I0610 10:47:53.104187    9108 round_trippers.go:580]     Audit-Id: b83b6af9-b4da-4d33-b2ed-2b2e4c432b4b
	I0610 10:47:53.104187    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:53.105425    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:53.590118    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:53.590155    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:53.590155    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:53.590155    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:53.594031    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:53.594474    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:53.594474    9108 round_trippers.go:580]     Audit-Id: edd44746-9cd9-4c6d-acbf-f70147767c60
	I0610 10:47:53.594474    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:53.594474    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:53.594474    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:53.594474    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:53.594474    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:53 GMT
	I0610 10:47:53.595392    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:53.595925    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:53.595925    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:53.595925    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:53.595925    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:53.598531    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:47:53.598531    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:53.598531    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:53.599471    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:53.599471    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:53.599471    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:53.599471    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:53 GMT
	I0610 10:47:53.599471    9108 round_trippers.go:580]     Audit-Id: 5c69f92f-bf37-4ab0-bad6-2f9f94fc6704
	I0610 10:47:53.599877    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:54.089918    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:54.089918    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:54.089918    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:54.089918    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:54.094921    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:54.094921    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:54.094921    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:54.094921    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:54 GMT
	I0610 10:47:54.094921    9108 round_trippers.go:580]     Audit-Id: dad06d9c-d76c-49c9-829d-8846107dc9ec
	I0610 10:47:54.094921    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:54.094921    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:54.094921    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:54.094921    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:54.095777    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:54.095844    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:54.095844    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:54.095844    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:54.098544    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:47:54.098544    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:54.098544    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:54.098544    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:54 GMT
	I0610 10:47:54.098544    9108 round_trippers.go:580]     Audit-Id: b22cd82f-24b9-4a9d-b441-fd85000e4de3
	I0610 10:47:54.098544    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:54.098544    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:54.098544    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:54.099119    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:54.099845    9108 pod_ready.go:102] pod "etcd-functional-228600" in "kube-system" namespace has status "Ready":"False"
	I0610 10:47:54.588422    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:54.588422    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:54.588422    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:54.588422    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:54.592263    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:54.593035    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:54.593035    9108 round_trippers.go:580]     Audit-Id: a6cd0362-8105-46fa-8a09-69c20f0943fc
	I0610 10:47:54.593035    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:54.593035    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:54.593035    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:54.593035    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:54.593035    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:54 GMT
	I0610 10:47:54.593245    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:54.593897    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:54.593897    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:54.594103    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:54.594103    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:54.596329    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:47:54.596329    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:54.597275    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:54.597275    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:54.597275    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:54.597275    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:54 GMT
	I0610 10:47:54.597275    9108 round_trippers.go:580]     Audit-Id: 9055f6f5-6911-4e3c-ab56-ab4987c168f8
	I0610 10:47:54.597275    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:54.597727    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:55.086912    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:55.087031    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:55.087031    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:55.087031    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:55.091180    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:55.091180    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:55.091180    9108 round_trippers.go:580]     Audit-Id: 4ae07f98-2e51-4c05-94e4-b42265db6a21
	I0610 10:47:55.091180    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:55.091180    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:55.091180    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:55.091180    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:55.091180    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:55 GMT
	I0610 10:47:55.091974    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:55.092701    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:55.092701    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:55.092768    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:55.092768    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:55.096146    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:55.096245    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:55.096245    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:55.096245    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:55 GMT
	I0610 10:47:55.096245    9108 round_trippers.go:580]     Audit-Id: 55380485-c418-4f73-a17d-ae7feb0a2dc4
	I0610 10:47:55.096245    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:55.096245    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:55.096245    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:55.096598    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:55.587428    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:55.587428    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:55.587428    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:55.587428    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:55.593494    9108 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:47:55.593831    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:55.593831    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:55.593831    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:55.593831    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:55.593831    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:55 GMT
	I0610 10:47:55.593831    9108 round_trippers.go:580]     Audit-Id: fa5e73b4-0c85-4850-9ab5-182d4d8de72b
	I0610 10:47:55.593831    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:55.593831    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:55.595028    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:55.595028    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:55.595028    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:55.595028    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:55.598419    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:55.598419    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:55.598419    9108 round_trippers.go:580]     Audit-Id: a9e94fa0-5819-462a-a2d8-c36788768db0
	I0610 10:47:55.598419    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:55.598419    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:55.598419    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:55.598419    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:55.598419    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:55 GMT
	I0610 10:47:55.598709    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:56.088845    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:56.088904    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:56.088904    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:56.088978    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:56.092722    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:56.092722    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:56.092722    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:56.092722    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:56.092722    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:56.092722    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:56.092722    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:56 GMT
	I0610 10:47:56.092722    9108 round_trippers.go:580]     Audit-Id: e1517232-f982-4faa-bd47-6332f5c57e79
	I0610 10:47:56.093564    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:56.093827    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:56.093827    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:56.093827    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:56.094373    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:56.097904    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:56.097904    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:56.097904    9108 round_trippers.go:580]     Audit-Id: f6a8cf29-d122-4125-b6e6-41e3e7279512
	I0610 10:47:56.097904    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:56.097904    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:56.097904    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:56.097904    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:56.097904    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:56 GMT
	I0610 10:47:56.098094    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:56.588052    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:56.588052    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:56.588052    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:56.588052    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:56.592515    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:56.592515    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:56.592588    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:56.592588    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:56.592588    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:56 GMT
	I0610 10:47:56.592588    9108 round_trippers.go:580]     Audit-Id: c9442708-0bc6-4305-b08a-c3081de60c8d
	I0610 10:47:56.592621    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:56.592621    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:56.592727    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:56.593642    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:56.593642    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:56.593740    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:56.593740    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:56.598682    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:56.598682    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:56.598682    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:56.599334    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:56 GMT
	I0610 10:47:56.599334    9108 round_trippers.go:580]     Audit-Id: ab6429eb-10b5-4576-9327-9e584628c209
	I0610 10:47:56.599334    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:56.599334    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:56.599334    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:56.599558    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:56.600083    9108 pod_ready.go:102] pod "etcd-functional-228600" in "kube-system" namespace has status "Ready":"False"
	I0610 10:47:57.089766    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:57.089766    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:57.089924    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:57.089924    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:57.094439    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:57.094439    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:57.094439    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:57.094439    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:57.094439    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:57 GMT
	I0610 10:47:57.094439    9108 round_trippers.go:580]     Audit-Id: 6f933ffa-31d6-4da3-851e-fda7efa124a8
	I0610 10:47:57.094439    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:57.094439    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:57.095026    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:57.096105    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:57.096105    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:57.096105    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:57.096105    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:57.101065    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:57.101065    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:57.101065    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:57.101065    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:57.101065    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:57.101065    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:57 GMT
	I0610 10:47:57.101065    9108 round_trippers.go:580]     Audit-Id: 587d04be-caf9-4a8e-9172-99b6934e97f9
	I0610 10:47:57.101065    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:57.101065    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:57.593054    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:57.593054    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:57.593199    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:57.593199    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:57.597494    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:57.597494    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:57.597494    9108 round_trippers.go:580]     Audit-Id: d934ccfa-43d8-4e4a-acc3-ed3cc2134acd
	I0610 10:47:57.597494    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:57.597761    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:57.597761    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:57.597761    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:57.597761    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:57 GMT
	I0610 10:47:57.597995    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:57.598632    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:57.598685    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:57.598685    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:57.598685    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:57.601721    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:57.601721    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:57.601721    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:57.601721    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:57 GMT
	I0610 10:47:57.601721    9108 round_trippers.go:580]     Audit-Id: ff2973d8-7db4-416b-8772-ebafe8a08e84
	I0610 10:47:57.601721    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:57.601721    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:57.602169    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:57.602483    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:58.090784    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:58.090784    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:58.090784    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:58.090784    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:58.098125    9108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 10:47:58.098125    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:58.098125    9108 round_trippers.go:580]     Audit-Id: 1b43b7f6-d5d1-4af2-a0fc-fdd51a7c3b54
	I0610 10:47:58.098125    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:58.098609    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:58.098609    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:58.098660    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:58.098660    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:58 GMT
	I0610 10:47:58.098831    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:58.099392    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:58.099392    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:58.099392    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:58.099392    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:58.101966    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:47:58.102963    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:58.102963    9108 round_trippers.go:580]     Audit-Id: 41525297-9380-417f-8eb4-1999086a2fdc
	I0610 10:47:58.102963    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:58.102963    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:58.102963    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:58.102963    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:58.102963    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:58 GMT
	I0610 10:47:58.102963    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:58.590049    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:58.590049    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:58.590049    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:58.590049    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:58.594925    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:58.594925    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:58.594925    9108 round_trippers.go:580]     Audit-Id: 472640d5-dd7a-4944-80e1-eb7cc963aee2
	I0610 10:47:58.595077    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:58.595077    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:58.595077    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:58.595077    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:58.595077    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:58 GMT
	I0610 10:47:58.595248    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:58.596214    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:58.596214    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:58.596214    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:58.596214    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:58.601278    9108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:47:58.601278    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:58.601278    9108 round_trippers.go:580]     Audit-Id: 31811903-103a-41ce-80ac-5dcf22257a93
	I0610 10:47:58.601278    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:58.601278    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:58.601278    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:58.601278    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:58.601278    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:58 GMT
	I0610 10:47:58.601900    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:58.601900    9108 pod_ready.go:102] pod "etcd-functional-228600" in "kube-system" namespace has status "Ready":"False"
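
The round_trippers lines above come from client-go's debug transport, which prints the method, URL, request headers, status, and response headers of every API call when log verbosity is high enough. A minimal hand-rolled equivalent is sketched below; the endpoint is taken from the log, but the call is made without the cluster's TLS material or credentials, so it only demonstrates the tracing hook, not a working API request.

    package main

    import (
    	"fmt"
    	"net/http"
    )

    // logRT wraps another RoundTripper and prints each request and response,
    // mimicking the round_trippers.go:463/469/574/577 lines in the log.
    type logRT struct{ next http.RoundTripper }

    func (l logRT) RoundTrip(req *http.Request) (*http.Response, error) {
    	fmt.Println(req.Method, req.URL)
    	for k, v := range req.Header {
    		fmt.Printf("    %s: %v\n", k, v)
    	}
    	resp, err := l.next.RoundTrip(req)
    	if resp != nil {
    		fmt.Println("Response Status:", resp.Status)
    		for k, v := range resp.Header {
    			fmt.Printf("    %s: %v\n", k, v)
    		}
    	}
    	return resp, err
    }

    func main() {
    	c := &http.Client{Transport: logRT{next: http.DefaultTransport}}
    	// Without the cluster CA and client certs this fails TLS verification;
    	// the point is the request/response trace, not the call itself.
    	if resp, err := c.Get("https://172.17.144.165:8441/api/v1/nodes/functional-228600"); err != nil {
    		fmt.Println("request error:", err)
    	} else {
    		resp.Body.Close()
    	}
    }
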
	I0610 10:47:59.089123    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:59.089123    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:59.089194    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:59.089194    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:59.092147    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:47:59.092593    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:59.092593    9108 round_trippers.go:580]     Audit-Id: 90e4954f-e2ab-4857-9802-3c0d28c48535
	I0610 10:47:59.092593    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:59.092593    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:59.092593    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:59.092593    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:59.092717    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:59 GMT
	I0610 10:47:59.092950    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:59.093360    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:59.093360    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:59.093360    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:59.093360    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:59.096099    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:47:59.096408    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:59.096408    9108 round_trippers.go:580]     Audit-Id: 25279bc4-c59d-4dac-a504-a1a36b308086
	I0610 10:47:59.096408    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:59.096408    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:59.096408    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:59.096408    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:59.096408    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:59 GMT
	I0610 10:47:59.099083    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:47:59.593021    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:47:59.593157    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:59.593157    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:59.593157    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:59.597766    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:47:59.597766    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:59.597890    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:59.597890    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:59.597890    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:59.597890    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:59 GMT
	I0610 10:47:59.597890    9108 round_trippers.go:580]     Audit-Id: 70021618-8beb-4388-bf37-f0642c9c35f5
	I0610 10:47:59.597890    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:59.598216    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"506","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0610 10:47:59.598521    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:47:59.598521    9108 round_trippers.go:469] Request Headers:
	I0610 10:47:59.598521    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:47:59.598521    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:47:59.601696    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:47:59.601696    9108 round_trippers.go:577] Response Headers:
	I0610 10:47:59.601696    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:47:59.601696    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:47:59 GMT
	I0610 10:47:59.601696    9108 round_trippers.go:580]     Audit-Id: 106f1697-098f-4244-a19c-759519b6973c
	I0610 10:47:59.601696    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:47:59.601696    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:47:59.601696    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:47:59.602552    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:00.088601    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:48:00.088601    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:00.088601    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:00.088601    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:00.093065    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:00.093065    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:00.093065    9108 round_trippers.go:580]     Audit-Id: b67d102d-95df-4750-827e-f5174f9af223
	I0610 10:48:00.093065    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:00.093065    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:00.093065    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:00.093065    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:00.093065    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:00 GMT
	I0610 10:48:00.094304    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"583","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6384 chars]
	I0610 10:48:00.095235    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:00.095235    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:00.095235    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:00.095235    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:00.097861    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:48:00.097861    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:00.098882    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:00 GMT
	I0610 10:48:00.098905    9108 round_trippers.go:580]     Audit-Id: 1419e444-9669-49ae-8b9b-d6128e5e6da5
	I0610 10:48:00.098905    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:00.098905    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:00.098905    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:00.098905    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:00.098972    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:00.099587    9108 pod_ready.go:92] pod "etcd-functional-228600" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:00.099587    9108 pod_ready.go:81] duration metric: took 10.5134984s for pod "etcd-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:00.099656    9108 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:00.099724    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-228600
	I0610 10:48:00.099902    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:00.099902    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:00.099902    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:00.102270    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:48:00.103258    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:00.103258    9108 round_trippers.go:580]     Audit-Id: cf670b0e-baf2-4e0a-ad61-dd4823b84319
	I0610 10:48:00.103258    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:00.103258    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:00.103258    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:00.103258    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:00.103258    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:00 GMT
	I0610 10:48:00.103977    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-228600","namespace":"kube-system","uid":"2e328504-3c20-4c0f-b4ea-d757129cab3e","resourceVersion":"507","creationTimestamp":"2024-06-10T10:44:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.144.165:8441","kubernetes.io/config.hash":"619be567118d40fa56a65cb809758762","kubernetes.io/config.mirror":"619be567118d40fa56a65cb809758762","kubernetes.io/config.seen":"2024-06-10T10:44:52.443727597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0610 10:48:00.104787    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:00.104861    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:00.104861    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:00.104861    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:00.108013    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:48:00.108428    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:00.108428    9108 round_trippers.go:580]     Audit-Id: 24d64937-574e-4b1f-a504-aba44a88b85c
	I0610 10:48:00.108428    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:00.108468    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:00.108468    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:00.108468    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:00.108468    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:00 GMT
	I0610 10:48:00.108645    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:00.601828    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-228600
	I0610 10:48:00.601909    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:00.601909    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:00.601909    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:00.605950    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:00.606141    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:00.606141    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:00.606141    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:00.606141    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:00 GMT
	I0610 10:48:00.606276    9108 round_trippers.go:580]     Audit-Id: a8a5e77c-9d76-4864-8ed1-f54c4d43bc58
	I0610 10:48:00.606276    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:00.606276    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:00.606529    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-228600","namespace":"kube-system","uid":"2e328504-3c20-4c0f-b4ea-d757129cab3e","resourceVersion":"507","creationTimestamp":"2024-06-10T10:44:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.144.165:8441","kubernetes.io/config.hash":"619be567118d40fa56a65cb809758762","kubernetes.io/config.mirror":"619be567118d40fa56a65cb809758762","kubernetes.io/config.seen":"2024-06-10T10:44:52.443727597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0610 10:48:00.607429    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:00.607429    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:00.607429    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:00.607496    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:00.609777    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:48:00.609777    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:00.609777    9108 round_trippers.go:580]     Audit-Id: 7ecd79a3-9f6e-4047-84d0-4a8a6de80da9
	I0610 10:48:00.610258    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:00.610258    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:00.610258    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:00.610258    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:00.610258    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:00 GMT
	I0610 10:48:00.610425    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:01.100809    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-228600
	I0610 10:48:01.100880    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:01.100947    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:01.100947    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:01.106788    9108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:48:01.106788    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:01.107626    9108 round_trippers.go:580]     Audit-Id: fce43302-1245-4123-a3f7-e23ba58f4ba6
	I0610 10:48:01.107626    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:01.107626    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:01.107626    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:01.107626    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:01.107626    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:01 GMT
	I0610 10:48:01.108239    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-228600","namespace":"kube-system","uid":"2e328504-3c20-4c0f-b4ea-d757129cab3e","resourceVersion":"507","creationTimestamp":"2024-06-10T10:44:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.144.165:8441","kubernetes.io/config.hash":"619be567118d40fa56a65cb809758762","kubernetes.io/config.mirror":"619be567118d40fa56a65cb809758762","kubernetes.io/config.seen":"2024-06-10T10:44:52.443727597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0610 10:48:01.109082    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:01.109082    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:01.109082    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:01.109082    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:01.112580    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:48:01.112666    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:01.112666    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:01 GMT
	I0610 10:48:01.112666    9108 round_trippers.go:580]     Audit-Id: ec60b12b-8afc-4024-8043-d4f98d64b51a
	I0610 10:48:01.112666    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:01.112764    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:01.112780    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:01.112780    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:01.112859    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:01.603816    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-228600
	I0610 10:48:01.603816    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:01.604024    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:01.604024    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:01.609540    9108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:48:01.610077    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:01.610077    9108 round_trippers.go:580]     Audit-Id: fac2617a-f0c2-4248-9a87-10ff70f6cb4e
	I0610 10:48:01.610077    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:01.610077    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:01.610077    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:01.610077    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:01.610077    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:01 GMT
	I0610 10:48:01.610422    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-228600","namespace":"kube-system","uid":"2e328504-3c20-4c0f-b4ea-d757129cab3e","resourceVersion":"585","creationTimestamp":"2024-06-10T10:44:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.144.165:8441","kubernetes.io/config.hash":"619be567118d40fa56a65cb809758762","kubernetes.io/config.mirror":"619be567118d40fa56a65cb809758762","kubernetes.io/config.seen":"2024-06-10T10:44:52.443727597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7914 chars]
	I0610 10:48:01.611102    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:01.611102    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:01.611102    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:01.611102    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:01.614748    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:48:01.614748    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:01.614748    9108 round_trippers.go:580]     Audit-Id: 6f3559eb-7d4c-405c-99f0-291d5f2fa6e2
	I0610 10:48:01.614748    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:01.614748    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:01.614748    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:01.614748    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:01.614748    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:01 GMT
	I0610 10:48:01.615901    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:01.616402    9108 pod_ready.go:92] pod "kube-apiserver-functional-228600" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:01.616484    9108 pod_ready.go:81] duration metric: took 1.516816s for pod "kube-apiserver-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:01.616484    9108 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:01.616639    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-228600
	I0610 10:48:01.616700    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:01.616736    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:01.616736    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:01.619732    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:48:01.619732    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:01.619732    9108 round_trippers.go:580]     Audit-Id: dd788b30-d40d-4b3e-bf42-9cd0aad04db7
	I0610 10:48:01.619732    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:01.619732    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:01.619732    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:01.619732    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:01.619732    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:01 GMT
	I0610 10:48:01.620533    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-228600","namespace":"kube-system","uid":"19f10dd4-2205-49b6-a025-f6f3513e7d5e","resourceVersion":"572","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b9a93a86b606d665586374c0a9782363","kubernetes.io/config.mirror":"b9a93a86b606d665586374c0a9782363","kubernetes.io/config.seen":"2024-06-10T10:45:00.101736293Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0610 10:48:01.620993    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:01.620993    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:01.620993    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:01.620993    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:01.623225    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:48:01.623225    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:01.623225    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:01.623225    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:01.623225    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:01.623225    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:01 GMT
	I0610 10:48:01.623225    9108 round_trippers.go:580]     Audit-Id: ad7ffc13-54f4-4e5e-aec6-cb456da25044
	I0610 10:48:01.623225    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:01.623225    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:01.624699    9108 pod_ready.go:92] pod "kube-controller-manager-functional-228600" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:01.624699    9108 pod_ready.go:81] duration metric: took 8.2151ms for pod "kube-controller-manager-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:01.624699    9108 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lpfg4" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:01.624846    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lpfg4
	I0610 10:48:01.624846    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:01.624846    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:01.624846    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:01.629688    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:01.629795    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:01.629795    9108 round_trippers.go:580]     Audit-Id: f92978c1-e70d-4dd3-94ff-5071b085612e
	I0610 10:48:01.629795    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:01.629795    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:01.629795    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:01.629795    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:01.629857    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:01 GMT
	I0610 10:48:01.629882    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lpfg4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b3716009-4a8f-457f-9f45-2960743d8939","resourceVersion":"512","creationTimestamp":"2024-06-10T10:45:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5de54ee9-64ec-49bc-9516-ead2a0d6840f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5de54ee9-64ec-49bc-9516-ead2a0d6840f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6040 chars]
	I0610 10:48:01.630563    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:01.630563    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:01.630563    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:01.630563    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:01.633782    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:48:01.633782    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:01.633782    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:01.633782    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:01.633782    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:01 GMT
	I0610 10:48:01.633782    9108 round_trippers.go:580]     Audit-Id: 9df38144-847f-46f4-814a-bef2a20417c2
	I0610 10:48:01.633782    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:01.633782    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:01.633782    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:01.633782    9108 pod_ready.go:92] pod "kube-proxy-lpfg4" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:01.634656    9108 pod_ready.go:81] duration metric: took 9.9562ms for pod "kube-proxy-lpfg4" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:01.634656    9108 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:01.634737    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-228600
	I0610 10:48:01.634806    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:01.634806    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:01.634841    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:01.637069    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:48:01.637069    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:01.637069    9108 round_trippers.go:580]     Audit-Id: db54c6cd-36bd-4264-8d72-d4e7e3594bd9
	I0610 10:48:01.637069    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:01.637069    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:01.637069    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:01.637069    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:01.637069    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:01 GMT
	I0610 10:48:01.638105    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-228600","namespace":"kube-system","uid":"2d8199a5-94d6-4fb7-a16e-3b51e9c63ae9","resourceVersion":"576","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dc58f836bdfce0483193e6cf4246d8d3","kubernetes.io/config.mirror":"dc58f836bdfce0483193e6cf4246d8d3","kubernetes.io/config.seen":"2024-06-10T10:45:00.101729692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0610 10:48:01.638105    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:01.638105    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:01.638105    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:01.638105    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:01.640987    9108 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:48:01.640987    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:01.640987    9108 round_trippers.go:580]     Audit-Id: e72edd50-b0b3-461f-90b6-094f3fbbbcf4
	I0610 10:48:01.641447    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:01.641447    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:01.641447    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:01.641447    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:01.641538    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:01 GMT
	I0610 10:48:01.641894    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:01.642147    9108 pod_ready.go:92] pod "kube-scheduler-functional-228600" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:01.642147    9108 pod_ready.go:81] duration metric: took 7.4914ms for pod "kube-scheduler-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:01.642147    9108 pod_ready.go:38] duration metric: took 12.5887666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
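
Each of the waits summarized above follows the same loop: GET the pod, GET the node, re-check the pod's Ready condition on roughly a 500ms cadence, and stop when it flips to True or the stated budget (here 4m0s per pod) runs out. Below is a minimal client-go sketch of that loop. The kubeconfig path is a placeholder, and minikube's pod_ready.go also cross-checks the node object, which is omitted here.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path; the log writes its kubeconfig under the Jenkins profile.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// ~500ms between GETs with a 4m budget, matching the cadence and the
    	// "waiting up to 4m0s" lines in the log.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-functional-228600", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat errors as transient and keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("ready:", err == nil)
    }
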
	I0610 10:48:01.642147    9108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 10:48:01.670895    9108 command_runner.go:130] > -16
	I0610 10:48:01.671027    9108 ops.go:34] apiserver oom_adj: -16
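
The -16 parsed here is read back by the exact command logged above; lower oom_adj values make the kernel's OOM killer less likely to target the process, so a healthy restarted apiserver is expected to report a negative score. Run locally rather than through minikube's ssh_runner, the probe looks like this sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same /bin/bash command as the ssh_runner line in the log.
    	out, err := exec.Command("/bin/bash", "-c",
    		`cat /proc/$(pgrep kube-apiserver)/oom_adj`).Output()
    	if err != nil {
    		panic(err) // e.g. no kube-apiserver process on this host
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
    }
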
	I0610 10:48:01.671027    9108 kubeadm.go:591] duration metric: took 23.171996s to restartPrimaryControlPlane
	I0610 10:48:01.671027    9108 kubeadm.go:393] duration metric: took 23.2445178s to StartCluster
	I0610 10:48:01.671199    9108 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:48:01.671431    9108 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 10:48:01.672920    9108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
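
The WriteFile line acquires a named lock with Delay:500ms and Timeout:1m0s before rewriting kubeconfig, so concurrent minikube processes cannot interleave writes to the same file. A stdlib-only sketch of that retry-until-timeout pattern using an O_EXCL lock file follows; minikube's actual lock implementation differs, and the file names are placeholders.

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // withFileLock retries acquiring path+".lock" every delay until timeout,
    // then runs fn while holding the lock.
    func withFileLock(path string, delay, timeout time.Duration, fn func() error) error {
    	lock := path + ".lock"
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			defer os.Remove(lock)
    			return fn()
    		}
    		if !errors.Is(err, os.ErrExist) {
    			return err
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", lock)
    		}
    		time.Sleep(delay) // matches the logged Delay:500ms
    	}
    }

    func main() {
    	err := withFileLock("kubeconfig", 500*time.Millisecond, time.Minute, func() error {
    		return os.WriteFile("kubeconfig", []byte("# updated cluster entry\n"), 0o600)
    	})
    	fmt.Println("write:", err)
    }
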
	I0610 10:48:01.674363    9108 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.144.165 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:48:01.674363    9108 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 10:48:01.680281    9108 out.go:177] * Verifying Kubernetes components...
	I0610 10:48:01.674363    9108 addons.go:69] Setting storage-provisioner=true in profile "functional-228600"
	I0610 10:48:01.674363    9108 addons.go:69] Setting default-storageclass=true in profile "functional-228600"
	I0610 10:48:01.674363    9108 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 10:48:01.680533    9108 addons.go:234] Setting addon storage-provisioner=true in "functional-228600"
	W0610 10:48:01.680593    9108 addons.go:243] addon storage-provisioner should already be in state true
	I0610 10:48:01.680593    9108 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-228600"
	I0610 10:48:01.680662    9108 host.go:66] Checking if "functional-228600" exists ...
	I0610 10:48:01.681281    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:48:01.682147    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
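
Both host checks shell out to PowerShell and read the VM object's state property. A condensed sketch of that call is below, assuming powershell.exe is on PATH and the session has Hyper-V administrator rights; the log invokes the full System32 path with the same -NoProfile -NonInteractive flags.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // vmState evaluates ( Hyper-V\Get-VM <name> ).state, as libmachine does above.
    func vmState(name string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
    		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name)).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := vmState("functional-228600")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("state:", state) // e.g. "Running"
    }
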
	I0610 10:48:01.696731    9108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:48:01.999386    9108 ssh_runner.go:195] Run: sudo systemctl start kubelet
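
ssh_runner executes each of these commands over the machine's SSH connection, one session per command. A stripped-down sketch with golang.org/x/crypto/ssh follows; the address is taken from the log, but the credentials and host-key handling below are placeholders (minikube authenticates with the machine's generated private key, not a password).

    package main

    import (
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // run opens a fresh session per command, as an SSH session executes one command.
    func run(client *ssh.Client, cmd string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	fmt.Printf("%s => %s\n", cmd, out)
    	return err
    }

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // illustrative only
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),                   // never in production
    	}
    	client, err := ssh.Dial("tcp", "172.17.144.165:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	// The same pair of commands logged above after reconfiguring the node.
    	for _, cmd := range []string{"sudo systemctl daemon-reload", "sudo systemctl start kubelet"} {
    		if err := run(client, cmd); err != nil {
    			panic(err)
    		}
    	}
    }
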
	I0610 10:48:02.025762    9108 node_ready.go:35] waiting up to 6m0s for node "functional-228600" to be "Ready" ...
	I0610 10:48:02.025833    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:02.025833    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:02.025833    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:02.025833    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:02.030424    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:02.030460    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:02.030460    9108 round_trippers.go:580]     Audit-Id: 4e162bbe-7d5a-4d6d-a452-f902bf92cfb3
	I0610 10:48:02.030460    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:02.030460    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:02.030460    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:02.030460    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:02.030460    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:02 GMT
	I0610 10:48:02.030879    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:02.031535    9108 node_ready.go:49] node "functional-228600" has status "Ready":"True"
	I0610 10:48:02.031605    9108 node_ready.go:38] duration metric: took 5.7716ms for node "functional-228600" to be "Ready" ...
	I0610 10:48:02.031605    9108 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:48:02.031760    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods
	I0610 10:48:02.031760    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:02.031887    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:02.031887    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:02.035681    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:48:02.035681    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:02.035681    9108 round_trippers.go:580]     Audit-Id: fdf9855c-89d9-49a0-9323-838f5607a4ab
	I0610 10:48:02.035681    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:02.035681    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:02.035681    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:02.035681    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:02.035681    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:02 GMT
	I0610 10:48:02.037109    9108 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"585"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gzsvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0efe6033-8a4b-4c49-91e0-2f4ba61b5441","resourceVersion":"517","creationTimestamp":"2024-06-10T10:45:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0be51d35-67ef-4d1a-93c5-af618f589939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0be51d35-67ef-4d1a-93c5-af618f589939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50141 chars]
	I0610 10:48:02.041686    9108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gzsvv" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:02.042356    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gzsvv
	I0610 10:48:02.042356    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:02.042356    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:02.042356    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:02.047951    9108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:48:02.048316    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:02.048316    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:02 GMT
	I0610 10:48:02.048316    9108 round_trippers.go:580]     Audit-Id: 32b958ac-ced1-4edc-90cc-5075ec61355a
	I0610 10:48:02.048316    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:02.048316    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:02.048316    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:02.048316    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:02.048701    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gzsvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0efe6033-8a4b-4c49-91e0-2f4ba61b5441","resourceVersion":"517","creationTimestamp":"2024-06-10T10:45:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0be51d35-67ef-4d1a-93c5-af618f589939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0be51d35-67ef-4d1a-93c5-af618f589939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0610 10:48:02.093353    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:02.093353    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:02.093353    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:02.093353    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:02.096943    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:48:02.097853    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:02.097929    9108 round_trippers.go:580]     Audit-Id: cc0625c8-2708-4b9e-a930-67d54fc96f84
	I0610 10:48:02.097929    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:02.097929    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:02.097929    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:02.097998    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:02.097998    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:02 GMT
	I0610 10:48:02.098494    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:02.099197    9108 pod_ready.go:92] pod "coredns-7db6d8ff4d-gzsvv" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:02.099314    9108 pod_ready.go:81] duration metric: took 57.0931ms for pod "coredns-7db6d8ff4d-gzsvv" in "kube-system" namespace to be "Ready" ...
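Each pod_ready gate in this log polls a single pod until its Ready condition turns True, bounded by the 6m0s timeout. A sketch of an equivalent loop using client-go's wait helpers (waitPodReady is a hypothetical name; minikube's own loop differs in detail, e.g. it also re-reads the node between polls):

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady blocks until the pod's Ready condition is True or timeout elapses.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }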
	I0610 10:48:02.099314    9108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:02.297910    9108 request.go:629] Waited for 198.2252ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:48:02.298189    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/etcd-functional-228600
	I0610 10:48:02.298189    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:02.298263    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:02.298263    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:02.303808    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:02.303867    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:02.303867    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:02 GMT
	I0610 10:48:02.303925    9108 round_trippers.go:580]     Audit-Id: fb917223-5bb8-4860-9d50-0dd6e6d740d9
	I0610 10:48:02.304012    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:02.304012    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:02.304012    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:02.304012    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:02.304340    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-228600","namespace":"kube-system","uid":"df19256d-9282-42ff-b5ab-75e01e69d744","resourceVersion":"583","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.144.165:2379","kubernetes.io/config.hash":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.mirror":"43c0ef62bd04621b5e62e4a76e3bf4cd","kubernetes.io/config.seen":"2024-06-10T10:45:00.101733693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6384 chars]
	I0610 10:48:02.489327    9108 request.go:629] Waited for 183.9325ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:02.489391    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:02.489391    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:02.489391    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:02.489391    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:02.493981    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:02.493981    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:02.493981    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:02.493981    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:02 GMT
	I0610 10:48:02.493981    9108 round_trippers.go:580]     Audit-Id: fb26291c-f8f6-42ce-af9c-940b91f6a40d
	I0610 10:48:02.493981    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:02.493981    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:02.493981    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:02.493981    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:02.493981    9108 pod_ready.go:92] pod "etcd-functional-228600" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:02.493981    9108 pod_ready.go:81] duration metric: took 394.663ms for pod "etcd-functional-228600" in "kube-system" namespace to be "Ready" ...
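The recurring "Waited for ~200ms due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter, not the server: rest.Config defaults to QPS=5 and Burst=10, so once the burst is spent requests are paced at one per 200 ms. A sketch of raising those limits (newFastClient and kubeconfigPath are placeholders):

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfigPath string) *kubernetes.Clientset {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            log.Fatal(err)
        }
        cfg.QPS = 50    // sustained client-side requests per second (default 5)
        cfg.Burst = 100 // burst allowance before pacing kicks in (default 10)
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        return cs
    }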
	I0610 10:48:02.493981    9108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:02.695016    9108 request.go:629] Waited for 201.0338ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-228600
	I0610 10:48:02.695405    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-228600
	I0610 10:48:02.695533    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:02.695559    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:02.695559    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:02.699564    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:02.699727    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:02.699727    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:02.699844    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:02.699844    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:02.699844    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:02.699844    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:02 GMT
	I0610 10:48:02.699844    9108 round_trippers.go:580]     Audit-Id: 0c5989eb-d3e7-43a8-b9cd-bad902a50876
	I0610 10:48:02.700621    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-228600","namespace":"kube-system","uid":"2e328504-3c20-4c0f-b4ea-d757129cab3e","resourceVersion":"585","creationTimestamp":"2024-06-10T10:44:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.144.165:8441","kubernetes.io/config.hash":"619be567118d40fa56a65cb809758762","kubernetes.io/config.mirror":"619be567118d40fa56a65cb809758762","kubernetes.io/config.seen":"2024-06-10T10:44:52.443727597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:44:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7914 chars]
	I0610 10:48:02.901713    9108 request.go:629] Waited for 200.2079ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:02.901713    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:02.901846    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:02.901846    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:02.901846    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:02.907339    9108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:48:02.907339    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:02.907888    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:02.907888    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:02 GMT
	I0610 10:48:02.907888    9108 round_trippers.go:580]     Audit-Id: c0f42b23-d22f-479f-8557-35164a79a538
	I0610 10:48:02.907888    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:02.907888    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:02.907888    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:02.908028    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:02.908694    9108 pod_ready.go:92] pod "kube-apiserver-functional-228600" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:02.908694    9108 pod_ready.go:81] duration metric: took 414.71ms for pod "kube-apiserver-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:02.908802    9108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:03.091527    9108 request.go:629] Waited for 182.5141ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-228600
	I0610 10:48:03.091672    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-228600
	I0610 10:48:03.091672    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:03.091672    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:03.091868    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:03.097943    9108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:48:03.097943    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:03.097943    9108 round_trippers.go:580]     Audit-Id: fd3be124-a9e8-4851-99a2-50c41ede40aa
	I0610 10:48:03.097943    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:03.097943    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:03.097943    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:03.097943    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:03.097943    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:03 GMT
	I0610 10:48:03.098299    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-228600","namespace":"kube-system","uid":"19f10dd4-2205-49b6-a025-f6f3513e7d5e","resourceVersion":"572","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b9a93a86b606d665586374c0a9782363","kubernetes.io/config.mirror":"b9a93a86b606d665586374c0a9782363","kubernetes.io/config.seen":"2024-06-10T10:45:00.101736293Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0610 10:48:03.296620    9108 request.go:629] Waited for 197.4213ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:03.296741    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:03.296741    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:03.296741    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:03.296874    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:03.301547    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:03.302443    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:03.302443    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:03.302443    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:03.302443    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:03.302559    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:03 GMT
	I0610 10:48:03.302559    9108 round_trippers.go:580]     Audit-Id: 6e62a6f6-5619-4009-b709-ece4622b1447
	I0610 10:48:03.302595    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:03.303456    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:03.304256    9108 pod_ready.go:92] pod "kube-controller-manager-functional-228600" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:03.304256    9108 pod_ready.go:81] duration metric: took 395.4513ms for pod "kube-controller-manager-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:03.304349    9108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lpfg4" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:03.488651    9108 request.go:629] Waited for 184.1889ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lpfg4
	I0610 10:48:03.488876    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lpfg4
	I0610 10:48:03.488939    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:03.488939    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:03.488939    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:03.492610    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:48:03.493556    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:03.493585    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:03.493585    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:03.493585    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:03 GMT
	I0610 10:48:03.493585    9108 round_trippers.go:580]     Audit-Id: 5b12479f-e2af-4ee6-bcfe-1c2977f2649c
	I0610 10:48:03.493585    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:03.493585    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:03.493960    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lpfg4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b3716009-4a8f-457f-9f45-2960743d8939","resourceVersion":"512","creationTimestamp":"2024-06-10T10:45:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5de54ee9-64ec-49bc-9516-ead2a0d6840f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5de54ee9-64ec-49bc-9516-ead2a0d6840f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6040 chars]
	I0610 10:48:03.694795    9108 request.go:629] Waited for 199.7686ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:03.694795    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:03.694795    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:03.695071    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:03.695071    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:03.699532    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:03.699532    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:03.699701    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:03.699701    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:03.699701    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:03 GMT
	I0610 10:48:03.699701    9108 round_trippers.go:580]     Audit-Id: f0a519b1-f29c-4c0b-846b-215b9e0a6ea4
	I0610 10:48:03.699701    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:03.699701    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:03.699701    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:03.700802    9108 pod_ready.go:92] pod "kube-proxy-lpfg4" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:03.700802    9108 pod_ready.go:81] duration metric: took 396.4493ms for pod "kube-proxy-lpfg4" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:03.700925    9108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:03.901263    9108 request.go:629] Waited for 200.1948ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-228600
	I0610 10:48:03.901501    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-228600
	I0610 10:48:03.901501    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:03.901501    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:03.901501    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:03.907225    9108 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:48:03.907225    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:03.907225    9108 round_trippers.go:580]     Audit-Id: 8997efdb-790d-4e05-9ea9-fd2b8e0a5723
	I0610 10:48:03.907225    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:03.907225    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:03.907225    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:03.907225    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:03.907225    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:03 GMT
	I0610 10:48:03.907225    9108 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-228600","namespace":"kube-system","uid":"2d8199a5-94d6-4fb7-a16e-3b51e9c63ae9","resourceVersion":"576","creationTimestamp":"2024-06-10T10:45:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dc58f836bdfce0483193e6cf4246d8d3","kubernetes.io/config.mirror":"dc58f836bdfce0483193e6cf4246d8d3","kubernetes.io/config.seen":"2024-06-10T10:45:00.101729692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0610 10:48:04.091195    9108 request.go:629] Waited for 182.7842ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:04.091195    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:48:04.091195    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:48:04.091195    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes/functional-228600
	I0610 10:48:04.091195    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:04.091195    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:04.091195    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:04.094926    9108 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 10:48:04.095545    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:04.097925    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:04.097925    9108 round_trippers.go:580]     Audit-Id: 77e53237-524c-4b1c-b975-8bc33f5c3312
	I0610 10:48:04.097925    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:04.097925    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:04.097925    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:04.097925    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:04.097925    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:04 GMT
	I0610 10:48:04.098128    9108 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:48:04.098128    9108 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 10:48:04.098257    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:48:04.098257    9108 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-10T10:44:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0610 10:48:04.098257    9108 pod_ready.go:92] pod "kube-scheduler-functional-228600" in "kube-system" namespace has status "Ready":"True"
	I0610 10:48:04.098257    9108 pod_ready.go:81] duration metric: took 397.3285ms for pod "kube-scheduler-functional-228600" in "kube-system" namespace to be "Ready" ...
	I0610 10:48:04.098793    9108 pod_ready.go:38] duration metric: took 2.0665536s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:48:04.098793    9108 api_server.go:52] waiting for apiserver process to appear ...
	I0610 10:48:04.103556    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:48:04.103556    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:48:04.104953    9108 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 10:48:04.105452    9108 kapi.go:59] client config for functional-228600: &rest.Config{Host:"https://172.17.144.165:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228600\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 10:48:04.105452    9108 addons.go:234] Setting addon default-storageclass=true in "functional-228600"
	W0610 10:48:04.105452    9108 addons.go:243] addon default-storageclass should already be in state true
	I0610 10:48:04.105452    9108 host.go:66] Checking if "functional-228600" exists ...
	I0610 10:48:04.105452    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:48:04.116519    9108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:48:04.150198    9108 command_runner.go:130] > 4907
	I0610 10:48:04.150198    9108 api_server.go:72] duration metric: took 2.4758153s to wait for apiserver process to appear ...
	I0610 10:48:04.150198    9108 api_server.go:88] waiting for apiserver healthz status ...
	I0610 10:48:04.150198    9108 api_server.go:253] Checking apiserver healthz at https://172.17.144.165:8441/healthz ...
	I0610 10:48:04.159341    9108 api_server.go:279] https://172.17.144.165:8441/healthz returned 200:
	ok
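The healthz probe above is a plain GET against /healthz that succeeds when the body is the literal string "ok". Through an already-authenticated clientset the same check can be written as follows (sketch; apiHealthz is an illustrative name):

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // apiHealthz returns nil when the apiserver's /healthz endpoint answers "ok".
    func apiHealthz(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("healthz: unexpected body %q", body)
        }
        return nil
    }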
	I0610 10:48:04.159341    9108 round_trippers.go:463] GET https://172.17.144.165:8441/version
	I0610 10:48:04.159341    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:04.159341    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:04.159341    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:04.160330    9108 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 10:48:04.160330    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:04.160330    9108 round_trippers.go:580]     Content-Length: 263
	I0610 10:48:04.160330    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:04 GMT
	I0610 10:48:04.160330    9108 round_trippers.go:580]     Audit-Id: 8b648459-4aac-400d-9cc3-62b22e4274bb
	I0610 10:48:04.160330    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:04.160330    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:04.160330    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:04.160330    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:04.161332    9108 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 10:48:04.161332    9108 api_server.go:141] control plane version: v1.30.1
	I0610 10:48:04.161332    9108 api_server.go:131] duration metric: took 11.1339ms to wait for apiserver health ...
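The /version payload decoded above is the standard discovery response; client-go exposes it directly, so the manual GET collapses to one call (sketch, reusing the cs clientset and log/fmt imports from the sketches above):

    info, err := cs.Discovery().ServerVersion()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(info.GitVersion) // "v1.30.1" for the control plane above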
	I0610 10:48:04.161332    9108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 10:48:04.298565    9108 request.go:629] Waited for 136.9983ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods
	I0610 10:48:04.298656    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods
	I0610 10:48:04.298656    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:04.298656    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:04.298656    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:04.306248    9108 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 10:48:04.306947    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:04.306947    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:04.306947    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:04.306947    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:04.307084    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:04.307084    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:04 GMT
	I0610 10:48:04.307084    9108 round_trippers.go:580]     Audit-Id: 2522fab1-f989-41ef-9729-3453e3517055
	I0610 10:48:04.308777    9108 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"585"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gzsvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0efe6033-8a4b-4c49-91e0-2f4ba61b5441","resourceVersion":"517","creationTimestamp":"2024-06-10T10:45:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0be51d35-67ef-4d1a-93c5-af618f589939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0be51d35-67ef-4d1a-93c5-af618f589939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50141 chars]
	I0610 10:48:04.312878    9108 system_pods.go:59] 7 kube-system pods found
	I0610 10:48:04.312878    9108 system_pods.go:61] "coredns-7db6d8ff4d-gzsvv" [0efe6033-8a4b-4c49-91e0-2f4ba61b5441] Running
	I0610 10:48:04.312878    9108 system_pods.go:61] "etcd-functional-228600" [df19256d-9282-42ff-b5ab-75e01e69d744] Running
	I0610 10:48:04.312878    9108 system_pods.go:61] "kube-apiserver-functional-228600" [2e328504-3c20-4c0f-b4ea-d757129cab3e] Running
	I0610 10:48:04.312878    9108 system_pods.go:61] "kube-controller-manager-functional-228600" [19f10dd4-2205-49b6-a025-f6f3513e7d5e] Running
	I0610 10:48:04.312878    9108 system_pods.go:61] "kube-proxy-lpfg4" [b3716009-4a8f-457f-9f45-2960743d8939] Running
	I0610 10:48:04.312878    9108 system_pods.go:61] "kube-scheduler-functional-228600" [2d8199a5-94d6-4fb7-a16e-3b51e9c63ae9] Running
	I0610 10:48:04.312878    9108 system_pods.go:61] "storage-provisioner" [7ddb20ed-d760-437c-90c6-9dfe48efdb1f] Running
	I0610 10:48:04.312878    9108 system_pods.go:74] duration metric: took 151.5446ms to wait for pod list to return data ...
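system_pods.go lists everything in kube-system and requires each pod to be Running before proceeding. An equivalent check (sketch, reusing cs, ctx, and the imports from the earlier sketches):

    pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, p := range pods.Items {
        if p.Status.Phase != corev1.PodRunning {
            log.Fatalf("pod %s is %s, want Running", p.Name, p.Status.Phase)
        }
    }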
	I0610 10:48:04.312878    9108 default_sa.go:34] waiting for default service account to be created ...
	I0610 10:48:04.504417    9108 request.go:629] Waited for 191.3846ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/namespaces/default/serviceaccounts
	I0610 10:48:04.504695    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/default/serviceaccounts
	I0610 10:48:04.504764    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:04.504764    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:04.504764    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:04.509570    9108 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:48:04.509570    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:04.509642    9108 round_trippers.go:580]     Content-Length: 261
	I0610 10:48:04.509642    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:04 GMT
	I0610 10:48:04.509642    9108 round_trippers.go:580]     Audit-Id: 3ec9e010-455c-44b0-aeaa-d2674b7c0b16
	I0610 10:48:04.509642    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:04.509642    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:04.509642    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:04.509642    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:04.509642    9108 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"585"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"433abd9c-ec41-46c7-b3e3-44ee2a80cf2f","resourceVersion":"294","creationTimestamp":"2024-06-10T10:45:13Z"}}]}
	I0610 10:48:04.509642    9108 default_sa.go:45] found service account: "default"
	I0610 10:48:04.509642    9108 default_sa.go:55] duration metric: took 196.7627ms for default service account to be created ...
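default_sa.go succeeds as soon as a ServiceAccount named "default" exists in the default namespace, which a single Get expresses (sketch; a real gate would retry on NotFound):

    if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err != nil {
        log.Fatalf("default service account not ready: %v", err)
    }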
	I0610 10:48:04.509642    9108 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 10:48:04.694243    9108 request.go:629] Waited for 183.9887ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods
	I0610 10:48:04.694461    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/namespaces/kube-system/pods
	I0610 10:48:04.694461    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:04.694547    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:04.694547    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:04.705847    9108 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 10:48:04.706692    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:04.706770    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:04.706770    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:04.706770    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:04 GMT
	I0610 10:48:04.706770    9108 round_trippers.go:580]     Audit-Id: 12259dc9-3c94-46a0-8d03-4a4fd90aaacf
	I0610 10:48:04.706770    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:04.706770    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:04.708826    9108 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"585"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gzsvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0efe6033-8a4b-4c49-91e0-2f4ba61b5441","resourceVersion":"517","creationTimestamp":"2024-06-10T10:45:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0be51d35-67ef-4d1a-93c5-af618f589939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T10:45:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0be51d35-67ef-4d1a-93c5-af618f589939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50141 chars]
	I0610 10:48:04.715496    9108 system_pods.go:86] 7 kube-system pods found
	I0610 10:48:04.716110    9108 system_pods.go:89] "coredns-7db6d8ff4d-gzsvv" [0efe6033-8a4b-4c49-91e0-2f4ba61b5441] Running
	I0610 10:48:04.716110    9108 system_pods.go:89] "etcd-functional-228600" [df19256d-9282-42ff-b5ab-75e01e69d744] Running
	I0610 10:48:04.716638    9108 system_pods.go:89] "kube-apiserver-functional-228600" [2e328504-3c20-4c0f-b4ea-d757129cab3e] Running
	I0610 10:48:04.716638    9108 system_pods.go:89] "kube-controller-manager-functional-228600" [19f10dd4-2205-49b6-a025-f6f3513e7d5e] Running
	I0610 10:48:04.716680    9108 system_pods.go:89] "kube-proxy-lpfg4" [b3716009-4a8f-457f-9f45-2960743d8939] Running
	I0610 10:48:04.716680    9108 system_pods.go:89] "kube-scheduler-functional-228600" [2d8199a5-94d6-4fb7-a16e-3b51e9c63ae9] Running
	I0610 10:48:04.716680    9108 system_pods.go:89] "storage-provisioner" [7ddb20ed-d760-437c-90c6-9dfe48efdb1f] Running
	I0610 10:48:04.716680    9108 system_pods.go:126] duration metric: took 207.0356ms to wait for k8s-apps to be running ...
	I0610 10:48:04.716680    9108 system_svc.go:44] waiting for kubelet service to be running ...
	I0610 10:48:04.731206    9108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:48:04.765650    9108 system_svc.go:56] duration metric: took 48.9697ms WaitForService to wait for kubelet
	I0610 10:48:04.765650    9108 kubeadm.go:576] duration metric: took 3.0912618s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:48:04.765797    9108 node_conditions.go:102] verifying NodePressure condition ...
	I0610 10:48:04.898922    9108 request.go:629] Waited for 133.0606ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.144.165:8441/api/v1/nodes
	I0610 10:48:04.899323    9108 round_trippers.go:463] GET https://172.17.144.165:8441/api/v1/nodes
	I0610 10:48:04.899323    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:04.899425    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:04.899425    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:04.903775    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:48:04.903853    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:04.903853    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:04.903853    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:04.903853    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:04.903853    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:04 GMT
	I0610 10:48:04.903853    9108 round_trippers.go:580]     Audit-Id: 1ebfb2cb-af22-4d87-8941-c361230f33f6
	I0610 10:48:04.903853    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:04.903853    9108 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"585"},"items":[{"metadata":{"name":"functional-228600","uid":"b8fc1f03-7a36-4fe9-889f-9f1aadf091df","resourceVersion":"499","creationTimestamp":"2024-06-10T10:44:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-228600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"functional-228600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T10_45_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0610 10:48:04.904602    9108 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:48:04.904602    9108 node_conditions.go:123] node cpu capacity is 2
	I0610 10:48:04.904602    9108 node_conditions.go:105] duration metric: took 138.8041ms to run NodePressure ...
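The NodePressure verification reads capacity and pressure conditions off each Node's status; the 17734596Ki and 2-CPU figures above come straight from Status.Capacity. A sketch of the same read (again reusing cs, ctx, and earlier imports):

    nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, n := range nodes.Items {
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        for _, c := range n.Status.Conditions {
            if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
                c.Status == corev1.ConditionTrue {
                log.Fatalf("node %s under pressure: %s", n.Name, c.Type)
            }
        }
    }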
	I0610 10:48:04.904602    9108 start.go:240] waiting for startup goroutines ...
	I0610 10:48:06.518641    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:48:06.518842    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:48:06.518946    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:48:06.549625    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:48:06.550478    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:48:06.550582    9108 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 10:48:06.550607    9108 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 10:48:06.550722    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
	I0610 10:48:08.924039    9108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 10:48:08.924039    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:48:08.924651    9108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
	I0610 10:48:09.338378    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:48:09.338378    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:48:09.338378    9108 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
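sshutil dials the VM on port 22 with the profile's private key, using the IP that the Hyper-V PowerShell query just returned. A minimal equivalent with golang.org/x/crypto/ssh (dialNode is illustrative; InsecureIgnoreHostKey is tolerable only for a throwaway test VM like this one):

    import (
        "net"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialNode opens an SSH connection to the minikube VM as the docker user.
    func dialNode(ip, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
        }
        return ssh.Dial("tcp", net.JoinHostPort(ip, "22"), cfg)
    }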
	I0610 10:48:09.485899    9108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:48:10.426999    9108 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0610 10:48:10.426999    9108 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0610 10:48:10.426999    9108 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0610 10:48:10.426999    9108 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0610 10:48:10.426999    9108 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0610 10:48:10.426999    9108 command_runner.go:130] > pod/storage-provisioner configured
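The "unchanged"/"configured" lines show why this step is safe to repeat: kubectl apply is idempotent, diffing the manifest against the live objects. Over an SSH client like the dialNode sketch above, the same command could be issued as (sketch):

    sess, err := client.NewSession()
    if err != nil {
        log.Fatal(err)
    }
    defer sess.Close()
    out, err := sess.CombinedOutput(
        "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
    if err != nil {
        log.Fatalf("apply failed: %v\n%s", err, out)
    }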
	I0610 10:48:11.732632    9108 main.go:141] libmachine: [stdout =====>] : 172.17.144.165
	
	I0610 10:48:11.733013    9108 main.go:141] libmachine: [stderr =====>] : 
	I0610 10:48:11.733278    9108 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
	I0610 10:48:11.882823    9108 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 10:48:12.059010    9108 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0610 10:48:12.059107    9108 round_trippers.go:463] GET https://172.17.144.165:8441/apis/storage.k8s.io/v1/storageclasses
	I0610 10:48:12.059107    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:12.059107    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:12.059107    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:12.062692    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:48:12.062692    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:12.062692    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:12.062692    9108 round_trippers.go:580]     Content-Length: 1273
	I0610 10:48:12.063085    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:12 GMT
	I0610 10:48:12.063085    9108 round_trippers.go:580]     Audit-Id: 06c1d101-a080-4596-a773-425f4eacd862
	I0610 10:48:12.063085    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:12.063085    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:12.063085    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:12.063152    9108 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"593"},"items":[{"metadata":{"name":"standard","uid":"083dc3a8-f4b1-4209-8760-6bb68920b525","resourceVersion":"392","creationTimestamp":"2024-06-10T10:45:24Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T10:45:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0610 10:48:12.063833    9108 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"083dc3a8-f4b1-4209-8760-6bb68920b525","resourceVersion":"392","creationTimestamp":"2024-06-10T10:45:24Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T10:45:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0610 10:48:12.063833    9108 round_trippers.go:463] PUT https://172.17.144.165:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0610 10:48:12.063833    9108 round_trippers.go:469] Request Headers:
	I0610 10:48:12.063833    9108 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:48:12.063833    9108 round_trippers.go:473]     Content-Type: application/json
	I0610 10:48:12.063833    9108 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 10:48:12.067685    9108 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:48:12.067685    9108 round_trippers.go:577] Response Headers:
	I0610 10:48:12.067685    9108 round_trippers.go:580]     Audit-Id: 661c3790-de6f-4a91-8edf-afe7b429132f
	I0610 10:48:12.067685    9108 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 10:48:12.068672    9108 round_trippers.go:580]     Content-Type: application/json
	I0610 10:48:12.068695    9108 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 456bf91d-7ff4-4528-8caa-3987526a01ba
	I0610 10:48:12.068695    9108 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c34234f8-5edc-4136-895e-e8666dd5a97e
	I0610 10:48:12.068695    9108 round_trippers.go:580]     Content-Length: 1220
	I0610 10:48:12.068695    9108 round_trippers.go:580]     Date: Mon, 10 Jun 2024 10:48:12 GMT
	I0610 10:48:12.068880    9108 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"083dc3a8-f4b1-4209-8760-6bb68920b525","resourceVersion":"392","creationTimestamp":"2024-06-10T10:45:24Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T10:45:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
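
Editor's note: the GET/PUT pair above is the default-storageclass addon verifying and re-applying the "standard" StorageClass through the API server. For reference, the same list call can be issued from client-go; the sketch below is illustrative only (it assumes a local kubeconfig pointing at this cluster) and is not part of the test harness.

// List StorageClasses the same way the log's GET
// /apis/storage.k8s.io/v1/storageclasses does. Illustrative sketch;
// kubeconfig location is an assumption, not taken from this report.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sc := range scs.Items {
		def := sc.Annotations["storageclass.kubernetes.io/is-default-class"]
		fmt.Printf("%s\tprovisioner=%s\tdefault=%s\n", sc.Name, sc.Provisioner, def)
	}
}

Against this run it should report the single "standard" class with provisioner k8s.io/minikube-hostpath, matching the response body above.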
	I0610 10:48:12.075450    9108 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 10:48:12.079437    9108 addons.go:510] duration metric: took 10.4049887s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0610 10:48:12.079437    9108 start.go:245] waiting for cluster config update ...
	I0610 10:48:12.079437    9108 start.go:254] writing updated cluster config ...
	I0610 10:48:12.091457    9108 ssh_runner.go:195] Run: rm -f paused
	I0610 10:48:12.235885    9108 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 10:48:12.239461    9108 out.go:177] * Done! kubectl is now configured to use "functional-228600" cluster and "default" namespace by default
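
Editor's note: the "[executing ==>]" lines earlier in this log are minikube's Hyper-V libmachine driver shelling out to PowerShell to read the VM's state and first IP address. A minimal Go sketch of that lookup, reusing the exact PowerShell expression from the log (the driver's retries and richer error handling are omitted):

// Resolve a Hyper-V VM's first IP address the way the libmachine
// driver does above: by invoking PowerShell. Sketch only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func vmIP(name string) (string, error) {
	// Same expression as the "[executing ==>]" log lines.
	ps := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := vmIP("functional-228600")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 172.17.144.165 in this run, per the [stdout =====>] lines
}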
	
	
	==> Docker <==
	Jun 10 10:47:47 functional-228600 dockerd[4038]: time="2024-06-10T10:47:47.553036916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:47 functional-228600 dockerd[4038]: time="2024-06-10T10:47:47.553158915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:47 functional-228600 dockerd[4038]: time="2024-06-10T10:47:47.619180492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 10:47:47 functional-228600 dockerd[4038]: time="2024-06-10T10:47:47.619288491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 10:47:47 functional-228600 dockerd[4038]: time="2024-06-10T10:47:47.619322491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:47 functional-228600 dockerd[4038]: time="2024-06-10T10:47:47.619472590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:47 functional-228600 dockerd[4038]: time="2024-06-10T10:47:47.646495717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 10:47:47 functional-228600 dockerd[4038]: time="2024-06-10T10:47:47.646889914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 10:47:47 functional-228600 dockerd[4038]: time="2024-06-10T10:47:47.647111013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:47 functional-228600 dockerd[4038]: time="2024-06-10T10:47:47.647507710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:47 functional-228600 cri-dockerd[4275]: time="2024-06-10T10:47:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/77690eac9cfaa32b72ddf2174917d9c6b13206fcb4676055129f6d08270cbc21/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 10:47:47 functional-228600 cri-dockerd[4275]: time="2024-06-10T10:47:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7784d5b22b5271e1467827b2fcb96531f00fa82f19d70a33f4950e7edfd8beee/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.035588156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.036753850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.036935349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.037123248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.039932833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.040487630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.041140926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.042044721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:48 functional-228600 cri-dockerd[4275]: time="2024-06-10T10:47:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/662869c9cf1aed12e841b7d8cea146f8c55712faf4d8014459b98f16bf764d6a/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.588186211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.588849908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.589014807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 10:47:48 functional-228600 dockerd[4038]: time="2024-06-10T10:47:48.589450005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	027998edd5e35       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   662869c9cf1ae       coredns-7db6d8ff4d-gzsvv
	72e6770780a53       6e38f40d628db       2 minutes ago       Running             storage-provisioner       1                   7784d5b22b527       storage-provisioner
	0bb1e42f5d9dc       747097150317f       2 minutes ago       Running             kube-proxy                1                   77690eac9cfaa       kube-proxy-lpfg4
	7c855a318ebbc       a52dc94f0a912       2 minutes ago       Running             kube-scheduler            1                   37accb1333e32       kube-scheduler-functional-228600
	f6b53ef5fa0c1       3861cfcd7c04c       2 minutes ago       Running             etcd                      1                   c283b8018aeef       etcd-functional-228600
	cd2f85aa6f80b       91be940803172       2 minutes ago       Running             kube-apiserver            1                   1b50371d97c7a       kube-apiserver-functional-228600
	ffc6b1f98b369       25a1387cdab82       2 minutes ago       Running             kube-controller-manager   1                   2d548ca13fcdf       kube-controller-manager-functional-228600
	14092496279ba       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   ff07de331b8c4       storage-provisioner
	fa6443d913fba       747097150317f       4 minutes ago       Exited              kube-proxy                0                   0c39dc24bea37       kube-proxy-lpfg4
	7e089bf2068a9       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   0011427cb4c7f       coredns-7db6d8ff4d-gzsvv
	7aede8efba307       a52dc94f0a912       5 minutes ago       Exited              kube-scheduler            0                   6d5a4de94bc9f       kube-scheduler-functional-228600
	eb3b161a2f039       25a1387cdab82       5 minutes ago       Exited              kube-controller-manager   0                   09f6b305cb4e3       kube-controller-manager-functional-228600
	ae465579e85a3       91be940803172       5 minutes ago       Exited              kube-apiserver            0                   4da082049d989       kube-apiserver-functional-228600
	50b2017b58919       3861cfcd7c04c       5 minutes ago       Exited              etcd                      0                   96af56a455cd5       etcd-functional-228600
	
	
	==> coredns [027998edd5e3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42804 - 40396 "HINFO IN 7272582867922641040.1775284051065205770. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04813512s
	
	
	==> coredns [7e089bf2068a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53778 - 5024 "HINFO IN 5426993520302427551.619901120727833200. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.056807751s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2050269385]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Jun-2024 10:45:16.727) (total time: 30001ms):
	Trace[2050269385]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:45:46.728)
	Trace[2050269385]: [30.001624222s] [30.001624222s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1104725898]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Jun-2024 10:45:16.727) (total time: 30001ms):
	Trace[1104725898]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:45:46.729)
	Trace[1104725898]: [30.001484279s] [30.001484279s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2116024038]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (10-Jun-2024 10:45:16.728) (total time: 30001ms):
	Trace[2116024038]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:45:46.729)
	Trace[2116024038]: [30.001634167s] [30.001634167s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
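
Editor's note: the "plugin/ready: Still waiting on: \"kubernetes\"" lines and the 5-second lameduck shutdown correspond to the ready, kubernetes, and health plugins of the stock kubeadm Corefile. For orientation, that upstream default looks roughly like the excerpt below; this is the assumed default configuration, not a Corefile captured from this run:

.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}

The i/o timeouts above simply mean this first CoreDNS instance came up before the service network could reach the API server at 10.96.0.1:443; the replacement instance (027998edd5e3) shows no such errors.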
	
	
	==> describe nodes <==
	Name:               functional-228600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-228600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=functional-228600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T10_45_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:44:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-228600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:49:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:49:18 +0000   Mon, 10 Jun 2024 10:44:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:49:18 +0000   Mon, 10 Jun 2024 10:44:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:49:18 +0000   Mon, 10 Jun 2024 10:44:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:49:18 +0000   Mon, 10 Jun 2024 10:45:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.144.165
	  Hostname:    functional-228600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9aec967e57e4e3fbe078f744a555d2c
	  System UUID:                94f9ed84-2c24-cc46-a1a3-1b64a32663a6
	  Boot ID:                    f7b7b438-dec1-4212-9859-ad5438a4f9ba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-gzsvv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m50s
	  kube-system                 etcd-functional-228600                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m4s
	  kube-system                 kube-apiserver-functional-228600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-functional-228600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-lpfg4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-functional-228600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m47s                  kube-proxy       
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m4s                   kubelet          Node functional-228600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s                   kubelet          Node functional-228600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s                   kubelet          Node functional-228600 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m4s                   kubelet          Starting kubelet.
	  Normal  NodeReady                5m                     kubelet          Node functional-228600 status is now: NodeReady
	  Normal  RegisteredNode           4m51s                  node-controller  Node functional-228600 event: Registered Node functional-228600 in Controller
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m23s (x8 over 2m23s)  kubelet          Node functional-228600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s (x8 over 2m23s)  kubelet          Node functional-228600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s (x7 over 2m23s)  kubelet          Node functional-228600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m5s                   node-controller  Node functional-228600 event: Registered Node functional-228600 in Controller
	
	
	==> dmesg <==
	[  +5.396883] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.719251] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[  +6.572365] systemd-fstab-generator[1721]: Ignoring "noauto" option for root device
	[  +0.108589] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.540644] systemd-fstab-generator[2128]: Ignoring "noauto" option for root device
	[  +0.133612] kauditd_printk_skb: 62 callbacks suppressed
	[Jun10 10:45] systemd-fstab-generator[2345]: Ignoring "noauto" option for root device
	[  +0.259156] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.645808] kauditd_printk_skb: 69 callbacks suppressed
	[Jun10 10:47] systemd-fstab-generator[3570]: Ignoring "noauto" option for root device
	[  +0.721732] systemd-fstab-generator[3603]: Ignoring "noauto" option for root device
	[  +0.298396] systemd-fstab-generator[3615]: Ignoring "noauto" option for root device
	[  +0.302672] systemd-fstab-generator[3629]: Ignoring "noauto" option for root device
	[  +5.404743] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.954959] systemd-fstab-generator[4224]: Ignoring "noauto" option for root device
	[  +0.243797] systemd-fstab-generator[4236]: Ignoring "noauto" option for root device
	[  +0.211787] systemd-fstab-generator[4248]: Ignoring "noauto" option for root device
	[  +0.329305] systemd-fstab-generator[4263]: Ignoring "noauto" option for root device
	[  +0.949452] systemd-fstab-generator[4419]: Ignoring "noauto" option for root device
	[  +3.531810] systemd-fstab-generator[4536]: Ignoring "noauto" option for root device
	[  +0.119127] kauditd_printk_skb: 139 callbacks suppressed
	[  +6.508096] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.983848] kauditd_printk_skb: 31 callbacks suppressed
	[Jun10 10:48] systemd-fstab-generator[5443]: Ignoring "noauto" option for root device
	[ +43.818405] hrtimer: interrupt took 2244329 ns
	
	
	==> etcd [50b2017b5891] <==
	{"level":"info","ts":"2024-06-10T10:44:53.982296Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"66c61b8910ea6150","initial-advertise-peer-urls":["https://172.17.144.165:2380"],"listen-peer-urls":["https://172.17.144.165:2380"],"advertise-client-urls":["https://172.17.144.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.144.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-10T10:44:53.982319Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-10T10:44:53.98246Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.144.165:2380"}
	{"level":"info","ts":"2024-06-10T10:44:53.982479Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.144.165:2380"}
	{"level":"info","ts":"2024-06-10T10:44:53.991157Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T10:44:53.998575Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"66c61b8910ea6150","local-member-attributes":"{Name:functional-228600 ClientURLs:[https://172.17.144.165:2379]}","request-path":"/0/members/66c61b8910ea6150/attributes","cluster-id":"79fe682f1094acaf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T10:44:53.998824Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T10:44:53.999486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T10:44:54.00193Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T10:44:54.001953Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T10:44:54.002086Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"79fe682f1094acaf","local-member-id":"66c61b8910ea6150","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T10:44:54.002153Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T10:44:54.002174Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T10:44:54.005001Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T10:44:54.008818Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.144.165:2379"}
	{"level":"info","ts":"2024-06-10T10:47:22.392218Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-10T10:47:22.392275Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-228600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.17.144.165:2380"],"advertise-client-urls":["https://172.17.144.165:2379"]}
	{"level":"warn","ts":"2024-06-10T10:47:22.392407Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T10:47:22.39258Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T10:47:22.450279Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.17.144.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T10:47:22.450313Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.17.144.165:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-10T10:47:22.450391Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"66c61b8910ea6150","current-leader-member-id":"66c61b8910ea6150"}
	{"level":"info","ts":"2024-06-10T10:47:22.465417Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.17.144.165:2380"}
	{"level":"info","ts":"2024-06-10T10:47:22.46564Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.17.144.165:2380"}
	{"level":"info","ts":"2024-06-10T10:47:22.465688Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-228600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.17.144.165:2380"],"advertise-client-urls":["https://172.17.144.165:2379"]}
	
	
	==> etcd [f6b53ef5fa0c] <==
	{"level":"info","ts":"2024-06-10T10:47:42.919163Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T10:47:42.919187Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T10:47:42.922822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66c61b8910ea6150 switched to configuration voters=(7405636912765624656)"}
	{"level":"info","ts":"2024-06-10T10:47:42.923499Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"79fe682f1094acaf","local-member-id":"66c61b8910ea6150","added-peer-id":"66c61b8910ea6150","added-peer-peer-urls":["https://172.17.144.165:2380"]}
	{"level":"info","ts":"2024-06-10T10:47:42.925672Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"79fe682f1094acaf","local-member-id":"66c61b8910ea6150","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T10:47:42.925847Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T10:47:42.934165Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-10T10:47:42.943583Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"66c61b8910ea6150","initial-advertise-peer-urls":["https://172.17.144.165:2380"],"listen-peer-urls":["https://172.17.144.165:2380"],"advertise-client-urls":["https://172.17.144.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.144.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-10T10:47:42.944035Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.144.165:2380"}
	{"level":"info","ts":"2024-06-10T10:47:42.953424Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.144.165:2380"}
	{"level":"info","ts":"2024-06-10T10:47:42.943772Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-10T10:47:44.714422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66c61b8910ea6150 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-10T10:47:44.714483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66c61b8910ea6150 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-10T10:47:44.714937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66c61b8910ea6150 received MsgPreVoteResp from 66c61b8910ea6150 at term 2"}
	{"level":"info","ts":"2024-06-10T10:47:44.715145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66c61b8910ea6150 became candidate at term 3"}
	{"level":"info","ts":"2024-06-10T10:47:44.715429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66c61b8910ea6150 received MsgVoteResp from 66c61b8910ea6150 at term 3"}
	{"level":"info","ts":"2024-06-10T10:47:44.715639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66c61b8910ea6150 became leader at term 3"}
	{"level":"info","ts":"2024-06-10T10:47:44.716025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 66c61b8910ea6150 elected leader 66c61b8910ea6150 at term 3"}
	{"level":"info","ts":"2024-06-10T10:47:44.720503Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"66c61b8910ea6150","local-member-attributes":"{Name:functional-228600 ClientURLs:[https://172.17.144.165:2379]}","request-path":"/0/members/66c61b8910ea6150/attributes","cluster-id":"79fe682f1094acaf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T10:47:44.721211Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T10:47:44.721582Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T10:47:44.721361Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T10:47:44.72139Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T10:47:44.726594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T10:47:44.725321Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.144.165:2379"}
	
	
	==> kernel <==
	 10:50:04 up 7 min,  0 users,  load average: 0.29, 0.41, 0.21
	Linux functional-228600 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ae465579e85a] <==
	W0610 10:47:31.613142       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.648462       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.678207       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.724812       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.778961       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.822896       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.854391       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.854979       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.878520       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.942441       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.946907       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.954504       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:31.971508       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.056212       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.094681       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.107859       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.129783       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.150696       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.150936       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.156775       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.175535       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.290031       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.313214       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.333054       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 10:47:32.345070       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [cd2f85aa6f80] <==
	I0610 10:47:46.488676       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 10:47:46.489151       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 10:47:46.490058       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 10:47:46.490238       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 10:47:46.491223       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 10:47:46.491649       1 aggregator.go:165] initial CRD sync complete...
	I0610 10:47:46.491820       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 10:47:46.491938       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 10:47:46.492087       1 cache.go:39] Caches are synced for autoregister controller
	I0610 10:47:46.496831       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 10:47:46.499075       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 10:47:46.499785       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 10:47:46.526488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 10:47:46.532717       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 10:47:46.532859       1 policy_source.go:224] refreshing policies
	I0610 10:47:46.541897       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 10:47:47.306059       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0610 10:47:47.936697       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.144.165]
	I0610 10:47:47.940115       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 10:47:47.959512       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 10:47:48.794647       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 10:47:48.845309       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 10:47:48.941740       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 10:47:49.013952       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 10:47:49.025475       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [eb3b161a2f03] <==
	I0610 10:45:13.248492       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 10:45:13.277673       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 10:45:13.361765       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 10:45:13.373780       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 10:45:13.398800       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 10:45:13.410662       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 10:45:13.413588       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 10:45:13.416560       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 10:45:13.418528       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 10:45:13.876216       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 10:45:13.876258       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 10:45:13.968142       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 10:45:14.390389       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="743.696544ms"
	I0610 10:45:14.425087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.638367ms"
	I0610 10:45:14.425530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="351.536µs"
	I0610 10:45:14.454346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.21µs"
	I0610 10:45:14.571921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.012353ms"
	I0610 10:45:14.587684       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.81902ms"
	I0610 10:45:14.588822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.703µs"
	I0610 10:45:16.826336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="75.807µs"
	I0610 10:45:16.841372       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.905µs"
	I0610 10:45:16.847839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.009µs"
	I0610 10:45:17.898319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.603µs"
	I0610 10:45:55.645120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.896316ms"
	I0610 10:45:55.645672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.501µs"
	
	
	==> kube-controller-manager [ffc6b1f98b36] <==
	I0610 10:47:59.221208       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 10:47:59.221563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.302µs"
	I0610 10:47:59.222039       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 10:47:59.225017       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 10:47:59.226740       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 10:47:59.231235       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 10:47:59.231634       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 10:47:59.231902       1 shared_informer.go:320] Caches are synced for HPA
	I0610 10:47:59.235024       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 10:47:59.237314       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 10:47:59.245653       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 10:47:59.254228       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 10:47:59.255575       1 shared_informer.go:320] Caches are synced for GC
	I0610 10:47:59.267099       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 10:47:59.300073       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 10:47:59.318899       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 10:47:59.350544       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 10:47:59.383748       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 10:47:59.389264       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 10:47:59.391656       1 shared_informer.go:320] Caches are synced for disruption
	I0610 10:47:59.451004       1 shared_informer.go:320] Caches are synced for namespace
	I0610 10:47:59.459391       1 shared_informer.go:320] Caches are synced for service account
	I0610 10:47:59.873805       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 10:47:59.903450       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 10:47:59.903689       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0bb1e42f5d9d] <==
	I0610 10:47:48.420259       1 server_linux.go:69] "Using iptables proxy"
	I0610 10:47:48.478128       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.144.165"]
	I0610 10:47:48.586451       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:47:48.586518       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:47:48.586537       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:47:48.591856       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:47:48.592032       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:47:48.592046       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:47:48.601313       1 config.go:192] "Starting service config controller"
	I0610 10:47:48.601411       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:47:48.601455       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:47:48.601463       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:47:48.602417       1 config.go:319] "Starting node config controller"
	I0610 10:47:48.602504       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 10:47:48.702553       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 10:47:48.702841       1 shared_informer.go:320] Caches are synced for node config
	I0610 10:47:48.702855       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [fa6443d913fb] <==
	I0610 10:45:16.890358       1 server_linux.go:69] "Using iptables proxy"
	I0610 10:45:16.904066       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.144.165"]
	I0610 10:45:16.958482       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:45:16.958521       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:45:16.958540       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:45:16.962831       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:45:16.963875       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:45:16.963916       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:45:16.965752       1 config.go:192] "Starting service config controller"
	I0610 10:45:16.965791       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:45:16.965818       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:45:16.965824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:45:16.966402       1 config.go:319] "Starting node config controller"
	I0610 10:45:16.966522       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 10:45:17.066671       1 shared_informer.go:320] Caches are synced for node config
	I0610 10:45:17.066705       1 shared_informer.go:320] Caches are synced for service config
	I0610 10:45:17.066828       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7aede8efba30] <==
	W0610 10:44:57.928943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 10:44:57.929069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 10:44:57.996513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 10:44:57.996673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 10:44:58.055142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 10:44:58.055181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 10:44:58.081867       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 10:44:58.081963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 10:44:58.139568       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 10:44:58.139618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 10:44:58.201220       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 10:44:58.201282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 10:44:58.241157       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 10:44:58.241205       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 10:44:58.313859       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:44:58.314280       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:44:58.349335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 10:44:58.349748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 10:44:58.558836       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 10:44:58.559317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 10:45:01.255066       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 10:47:22.299884       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0610 10:47:22.300030       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0610 10:47:22.300550       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0610 10:47:22.300628       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7c855a318ebb] <==
	I0610 10:47:43.938846       1 serving.go:380] Generated self-signed cert in-memory
	W0610 10:47:46.372263       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 10:47:46.374534       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 10:47:46.374646       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 10:47:46.374944       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 10:47:46.438952       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 10:47:46.439014       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:47:46.445666       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 10:47:46.445717       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 10:47:46.446509       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 10:47:46.448469       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 10:47:46.546151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 10:47:42 functional-228600 kubelet[4543]: W0610 10:47:42.527966    4543 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.144.165:8441: connect: connection refused
	Jun 10 10:47:42 functional-228600 kubelet[4543]: E0610 10:47:42.528075    4543 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.144.165:8441: connect: connection refused
	Jun 10 10:47:44 functional-228600 kubelet[4543]: I0610 10:47:44.075150    4543 kubelet_node_status.go:73] "Attempting to register node" node="functional-228600"
	Jun 10 10:47:46 functional-228600 kubelet[4543]: I0610 10:47:46.576781    4543 kubelet_node_status.go:112] "Node was previously registered" node="functional-228600"
	Jun 10 10:47:46 functional-228600 kubelet[4543]: I0610 10:47:46.577510    4543 kubelet_node_status.go:76] "Successfully registered node" node="functional-228600"
	Jun 10 10:47:46 functional-228600 kubelet[4543]: I0610 10:47:46.580137    4543 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 10 10:47:46 functional-228600 kubelet[4543]: I0610 10:47:46.581557    4543 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 10 10:47:46 functional-228600 kubelet[4543]: I0610 10:47:46.923555    4543 apiserver.go:52] "Watching apiserver"
	Jun 10 10:47:46 functional-228600 kubelet[4543]: I0610 10:47:46.928724    4543 topology_manager.go:215] "Topology Admit Handler" podUID="b3716009-4a8f-457f-9f45-2960743d8939" podNamespace="kube-system" podName="kube-proxy-lpfg4"
	Jun 10 10:47:46 functional-228600 kubelet[4543]: I0610 10:47:46.928891    4543 topology_manager.go:215] "Topology Admit Handler" podUID="0efe6033-8a4b-4c49-91e0-2f4ba61b5441" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gzsvv"
	Jun 10 10:47:46 functional-228600 kubelet[4543]: I0610 10:47:46.928963    4543 topology_manager.go:215] "Topology Admit Handler" podUID="7ddb20ed-d760-437c-90c6-9dfe48efdb1f" podNamespace="kube-system" podName="storage-provisioner"
	Jun 10 10:47:46 functional-228600 kubelet[4543]: I0610 10:47:46.953681    4543 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 10 10:47:47 functional-228600 kubelet[4543]: I0610 10:47:47.031763    4543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3716009-4a8f-457f-9f45-2960743d8939-lib-modules\") pod \"kube-proxy-lpfg4\" (UID: \"b3716009-4a8f-457f-9f45-2960743d8939\") " pod="kube-system/kube-proxy-lpfg4"
	Jun 10 10:47:47 functional-228600 kubelet[4543]: I0610 10:47:47.032143    4543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3716009-4a8f-457f-9f45-2960743d8939-xtables-lock\") pod \"kube-proxy-lpfg4\" (UID: \"b3716009-4a8f-457f-9f45-2960743d8939\") " pod="kube-system/kube-proxy-lpfg4"
	Jun 10 10:47:47 functional-228600 kubelet[4543]: I0610 10:47:47.032521    4543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7ddb20ed-d760-437c-90c6-9dfe48efdb1f-tmp\") pod \"storage-provisioner\" (UID: \"7ddb20ed-d760-437c-90c6-9dfe48efdb1f\") " pod="kube-system/storage-provisioner"
	Jun 10 10:48:41 functional-228600 kubelet[4543]: E0610 10:48:41.027612    4543 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:48:41 functional-228600 kubelet[4543]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:48:41 functional-228600 kubelet[4543]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:48:41 functional-228600 kubelet[4543]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:48:41 functional-228600 kubelet[4543]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:49:41 functional-228600 kubelet[4543]: E0610 10:49:41.025582    4543 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:49:41 functional-228600 kubelet[4543]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:49:41 functional-228600 kubelet[4543]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:49:41 functional-228600 kubelet[4543]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:49:41 functional-228600 kubelet[4543]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [14092496279b] <==
	I0610 10:45:22.975941       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 10:45:22.988700       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 10:45:22.988865       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 10:45:23.007411       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 10:45:23.008043       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8c6ed72-85a2-474a-91b6-480b2c8c3c20", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-228600_ffe1dd5f-c014-45fc-9f26-7fc7f90f4b9c became leader
	I0610 10:45:23.008180       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-228600_ffe1dd5f-c014-45fc-9f26-7fc7f90f4b9c!
	I0610 10:45:23.115774       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-228600_ffe1dd5f-c014-45fc-9f26-7fc7f90f4b9c!
	
	
	==> storage-provisioner [72e6770780a5] <==
	I0610 10:47:48.251375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 10:47:48.293499       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 10:47:48.293570       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 10:48:05.715517       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 10:48:05.715694       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-228600_fccd94bb-67f6-45a5-b16c-96453c9a7127!
	I0610 10:48:05.717096       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8c6ed72-85a2-474a-91b6-480b2c8c3c20", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-228600_fccd94bb-67f6-45a5-b16c-96453c9a7127 became leader
	I0610 10:48:05.816018       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-228600_fccd94bb-67f6-45a5-b16c-96453c9a7127!
	

-- /stdout --
** stderr ** 
	W0610 10:49:56.182994   11408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-228600 -n functional-228600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-228600 -n functional-228600: (12.9938531s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-228600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (36.37s)
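
The stderr noise that breaks the assertion above (and most failures in this run) comes from minikube's embedded Docker CLI probing the current context at startup and warning when that context's meta.json is missing on disk. The hashed directory in the quoted path is simply the SHA-256 of the context name "default", which the following plain-Go sketch (nothing minikube-specific) reproduces:

package main

import (
	"crypto/sha256"
	"fmt"
)

// Docker keeps CLI context metadata under
// ~/.docker/contexts/meta/<sha256 of the context name>/meta.json,
// so hashing "default" yields the directory named in the warning.
func main() {
	fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
	// 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
}

Recreating the context on the Jenkins host (for example with `docker context use default`) would likely silence the warning for the whole suite.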

TestFunctional/parallel/ConfigCmd (1.72s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-228600 config unset cpus" to be -""- but got *"W0610 10:53:17.256904    9524 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228600 config get cpus: exit status 14 (282.1367ms)

** stderr ** 
	W0610 10:53:17.588240    7412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-228600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0610 10:53:17.588240    7412 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-228600 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0610 10:53:17.898436   12692 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-228600 config get cpus" to be -""- but got *"W0610 10:53:18.207823   10060 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-228600 config unset cpus" to be -""- but got *"W0610 10:53:18.473139    3716 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228600 config get cpus: exit status 14 (228.611ms)

** stderr ** 
	W0610 10:53:18.737422    1684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-228600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0610 10:53:18.737422    1684 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.72s)
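
functional_test.go:1206 compares the captured stderr verbatim with the expected message, so the prepended Docker-context warning fails even the cases whose substantive output ("Error: specified key could not be found in config") is exactly right. A minimal sketch of a tolerant comparison, using a hypothetical helper rather than the real test code:

package main

import (
	"fmt"
	"strings"
)

// filterKnownWarnings drops stderr lines unrelated to the command under
// test; here, the Docker CLI context warning that pollutes every run.
// Illustrative only, not the actual functional_test.go logic.
func filterKnownWarnings(stderr string) string {
	var kept []string
	for _, line := range strings.Split(stderr, "\n") {
		if strings.Contains(line, "Unable to resolve the current Docker CLI context") {
			continue
		}
		kept = append(kept, line)
	}
	return strings.TrimSpace(strings.Join(kept, "\n"))
}

func main() {
	got := "W0610 10:53:18.737422    1684 main.go:291] Unable to resolve the current Docker CLI context \"default\": context not found\nError: specified key could not be found in config"
	want := "Error: specified key could not be found in config"
	fmt.Println(filterKnownWarnings(got) == want) // true
}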

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228600 service --namespace=default --https --url hello-node: exit status 1 (15.0239142s)

** stderr ** 
	W0610 10:54:05.146566    9516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-228600 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)
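
All three ServiceCmd subtests in this group fail identically: exit status 1 after roughly 15 seconds with only the context warning on stderr, meaning the service URL was never resolved at all. For a NodePort service, the output `minikube service --https --url` is expected to print is just the node IP plus the allocated NodePort; the sketch below builds that shape, with the port value assumed since the real one lives in the service spec:

package main

import (
	"fmt"
	"net/url"
)

// Assemble the URL shape the HTTPS subtest expects the command to
// print: node IP (172.17.144.165 in this run) plus the service's
// NodePort (30080 is a placeholder, not taken from the log).
func main() {
	u := url.URL{
		Scheme: "https",
		Host:   fmt.Sprintf("%s:%d", "172.17.144.165", 30080),
	}
	fmt.Println(u.String()) // https://172.17.144.165:30080
}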

TestFunctional/parallel/ServiceCmd/Format (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228600 service hello-node --url --format={{.IP}}: exit status 1 (15.0404943s)

** stderr ** 
	W0610 10:54:20.183026    2208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-228600 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.04s)
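
The --format value is rendered as a Go template over the resolved service URL, so {{.IP}} should print only the host part; because the command exited before resolving anything, the test received "" and the IP check at functional_test.go:1544 fired. A self-contained look at the template mechanics, with the struct name assumed (only the {{.IP}} syntax comes from the test):

package main

import (
	"os"
	"text/template"
)

// serviceURL stands in for whatever value minikube executes the
// --format template against; the IP field name matches the test's
// template, everything else here is assumed.
type serviceURL struct {
	IP   string
	Port int
}

func main() {
	tmpl := template.Must(template.New("format").Parse("{{.IP}}"))
	if err := tmpl.Execute(os.Stdout, serviceURL{IP: "172.17.144.165", Port: 30080}); err != nil {
		panic(err)
	}
	// Prints: 172.17.144.165
}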

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228600 service hello-node --url: exit status 1 (15.0120646s)

** stderr ** 
	W0610 10:54:35.230764    6092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-228600 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.01s)
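
The failure at functional_test.go:1569 reduces to parsing the printed endpoint and requiring an "http" scheme; with the empty output above, the parsed scheme is empty as well. A sketch of the shape of that assertion (not the literal test code):

package main

import (
	"fmt"
	"net/url"
)

// checkScheme parses an endpoint the way the test does and requires
// http. url.Parse("") succeeds but yields an empty Scheme, which is
// exactly the *""* the test reported.
func checkScheme(endpoint string) error {
	u, err := url.Parse(endpoint)
	if err != nil {
		return err
	}
	if u.Scheme != "http" {
		return fmt.Errorf("expected scheme %q, got %q", "http", u.Scheme)
	}
	return nil
}

func main() {
	fmt.Println(checkScheme(""))                            // scheme error
	fmt.Println(checkScheme("http://172.17.144.165:31234")) // <nil>
}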

TestMultiControlPlane/serial/PingHostFromPods (72.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-9tfq9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-9tfq9 -- sh -c "ping -c 1 172.17.144.1"
E0610 11:14:41.851284    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-9tfq9 -- sh -c "ping -c 1 172.17.144.1": exit status 1 (10.5440012s)

-- stdout --
	PING 172.17.144.1 (172.17.144.1): 56 data bytes
	
	--- 172.17.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0610 11:14:31.520972    7108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.144.1) from pod (busybox-fc5497c4f-9tfq9): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-kff2v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-kff2v -- sh -c "ping -c 1 172.17.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-kff2v -- sh -c "ping -c 1 172.17.144.1": exit status 1 (10.5511693s)

-- stdout --
	PING 172.17.144.1 (172.17.144.1): 56 data bytes
	
	--- 172.17.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0610 11:14:42.634092    7136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.144.1) from pod (busybox-fc5497c4f-kff2v): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-s49nb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-s49nb -- sh -c "ping -c 1 172.17.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-s49nb -- sh -c "ping -c 1 172.17.144.1": exit status 1 (10.5450022s)

-- stdout --
	PING 172.17.144.1 (172.17.144.1): 56 data bytes
	
	--- 172.17.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0610 11:14:53.729244    7412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.144.1) from pod (busybox-fc5497c4f-s49nb): exit status 1
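
All three busybox pods resolve host.minikube.internal (their nslookup steps pass) yet lose 100% of ICMP echoes to the host-side gateway 172.17.144.1. On Hyper-V's Default Switch that pattern is commonly the Windows host firewall dropping inbound ICMPv4 rather than a CNI problem, though the log alone cannot confirm the cause. The probe is easy to replay outside the harness; the sketch below shells out to kubectl directly (the test goes through `minikube kubectl -p ha-368100 --`), with pod and profile names taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

// Re-run the failing probe: a single ICMP echo from one of the run's
// busybox pods to the host gateway. Assumes the ha-368100 context is
// present in the local kubeconfig.
func main() {
	cmd := exec.Command("kubectl", "--context", "ha-368100",
		"exec", "busybox-fc5497c4f-9tfq9", "--",
		"sh", "-c", "ping -c 1 172.17.144.1")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nexit: %v\n", out, err)
}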
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-368100 -n ha-368100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-368100 -n ha-368100: (13.5235456s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 logs -n 25: (9.6514589s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | functional-228600 ssh pgrep          | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:56 UTC |                     |
	|         | buildkitd                            |                   |                   |         |                     |                     |
	| image   | functional-228600 image build -t     | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:56 UTC | 10 Jun 24 10:57 UTC |
	|         | localhost/my-image:functional-228600 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-228600 image ls           | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:57 UTC | 10 Jun 24 10:57 UTC |
	| delete  | -p functional-228600                 | functional-228600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:00 UTC | 10 Jun 24 11:01 UTC |
	| start   | -p ha-368100 --wait=true             | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:01 UTC | 10 Jun 24 11:13 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- apply -f             | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- rollout status       | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- get pods -o          | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- get pods -o          | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-9tfq9 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-kff2v --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-s49nb --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-9tfq9 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-kff2v --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-s49nb --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-9tfq9 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-kff2v -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-s49nb -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- get pods -o          | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-9tfq9              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC |                     |
	|         | busybox-fc5497c4f-9tfq9 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-kff2v              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC |                     |
	|         | busybox-fc5497c4f-kff2v -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | busybox-fc5497c4f-s49nb              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-368100 -- exec                 | ha-368100         | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:14 UTC |                     |
	|         | busybox-fc5497c4f-s49nb -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 11:01:57
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 11:01:57.021959   12440 out.go:291] Setting OutFile to fd 968 ...
	I0610 11:01:57.022986   12440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:01:57.022986   12440 out.go:304] Setting ErrFile to fd 944...
	I0610 11:01:57.022986   12440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:01:57.049032   12440 out.go:298] Setting JSON to false
	I0610 11:01:57.053939   12440 start.go:129] hostinfo: {"hostname":"minikube6","uptime":17205,"bootTime":1718000111,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 11:01:57.054488   12440 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 11:01:57.062945   12440 out.go:177] * [ha-368100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 11:01:57.063284   12440 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 11:01:57.063284   12440 notify.go:220] Checking for updates...
	I0610 11:01:57.071787   12440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:01:57.074586   12440 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 11:01:57.076886   12440 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:01:57.079532   12440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:01:57.081422   12440 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:02:02.534821   12440 out.go:177] * Using the hyperv driver based on user configuration
	I0610 11:02:02.538941   12440 start.go:297] selected driver: hyperv
	I0610 11:02:02.538989   12440 start.go:901] validating driver "hyperv" against <nil>
	I0610 11:02:02.538989   12440 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:02:02.590314   12440 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 11:02:02.592943   12440 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:02:02.593039   12440 cni.go:84] Creating CNI manager for ""
	I0610 11:02:02.593039   12440 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 11:02:02.593039   12440 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 11:02:02.593406   12440 start.go:340] cluster config:
	{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:02:02.593807   12440 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:02:02.599362   12440 out.go:177] * Starting "ha-368100" primary control-plane node in "ha-368100" cluster
	I0610 11:02:02.602436   12440 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 11:02:02.602436   12440 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 11:02:02.602436   12440 cache.go:56] Caching tarball of preloaded images
	I0610 11:02:02.603221   12440 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 11:02:02.603599   12440 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 11:02:02.603815   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:02:02.604447   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json: {Name:mk3ae4ba2ecba2ca11cb354f04b2c0d5351cff57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:02:02.605393   12440 start.go:360] acquireMachinesLock for ha-368100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:02:02.605393   12440 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-368100"
	I0610 11:02:02.605775   12440 start.go:93] Provisioning new machine with config: &{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:02:02.605775   12440 start.go:125] createHost starting for "" (driver="hyperv")
	I0610 11:02:02.606293   12440 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 11:02:02.609110   12440 start.go:159] libmachine.API.Create for "ha-368100" (driver="hyperv")
	I0610 11:02:02.609110   12440 client.go:168] LocalClient.Create starting
	I0610 11:02:02.609452   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 11:02:02.609998   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:02:02.609998   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:02:02.610172   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 11:02:02.610172   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:02:02.610172   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:02:02.610172   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 11:02:04.687722   12440 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 11:02:04.687722   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:04.687722   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 11:02:06.469298   12440 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 11:02:06.469298   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:06.469390   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:02:08.001965   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:02:08.010256   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:08.010256   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:02:11.717132   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:02:11.728794   12440 main.go:141] libmachine: [stderr =====>] : 
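
This query is how the driver picks a virtual switch: list every External switch plus the built-in "Default Switch" (fixed GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444), then sort by SwitchType. Only the Internal (SwitchType 1) Default Switch exists here, so it is selected below. The logged one-liner, unfolded for readability:

    [Console]::OutputEncoding = [Text.Encoding]::UTF8
    $defaultSwitchId = 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444'  # well-known ID of the built-in Default Switch
    ConvertTo-Json @(
        Hyper-V\Get-VMSwitch |
            Select-Object Id, Name, SwitchType |
            Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq $defaultSwitchId) } |
            Sort-Object -Property SwitchType
    )
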
	I0610 11:02:11.731656   12440 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 11:02:12.240927   12440 main.go:141] libmachine: Creating SSH key...
	I0610 11:02:12.582680   12440 main.go:141] libmachine: Creating VM...
	I0610 11:02:12.582680   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:02:15.489701   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:02:15.489701   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:15.489701   12440 main.go:141] libmachine: Using switch "Default Switch"
	I0610 11:02:15.489701   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:02:17.245811   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:02:17.257183   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:17.257183   12440 main.go:141] libmachine: Creating VHD
	I0610 11:02:17.257183   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 11:02:21.079461   12440 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A16EA961-09AB-4873-A890-7E3ACDEEE574
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 11:02:21.090839   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:21.090839   12440 main.go:141] libmachine: Writing magic tar header
	I0610 11:02:21.090839   12440 main.go:141] libmachine: Writing SSH key tar header
	I0610 11:02:21.100255   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 11:02:24.317970   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:24.329971   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:24.329971   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\disk.vhd' -SizeBytes 20000MB
	I0610 11:02:26.954132   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:26.954132   12440 main.go:141] libmachine: [stderr =====>] : 
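
The disk is assembled in three steps: create a tiny fixed-size VHD, write a raw tar stream containing the SSH key straight into it (the "magic tar header" lines above; the guest recovers the key from that payload when it initializes the disk on first boot), then convert to a dynamic VHD and grow it to the requested 20000MB. Condensed from the logged commands, with the machine directory shortened to a variable:

    $machineDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100'
    # 1. Small fixed VHD: a flat layout, so raw tar bytes can be written at a known offset
    Hyper-V\New-VHD -Path "$machineDir\fixed.vhd" -SizeBytes 10MB -Fixed
    # (the driver writes the tar header and SSH key into fixed.vhd here)
    # 2. Convert to a sparse dynamic disk and delete the fixed original
    Hyper-V\Convert-VHD -Path "$machineDir\fixed.vhd" -DestinationPath "$machineDir\disk.vhd" -VHDType Dynamic -DeleteSource
    # 3. Grow the virtual size; host space is consumed only as blocks are written
    Hyper-V\Resize-VHD -Path "$machineDir\disk.vhd" -SizeBytes 20000MB
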
	I0610 11:02:26.965785   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-368100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 11:02:30.700381   12440 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-368100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 11:02:30.713337   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:30.713337   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-368100 -DynamicMemoryEnabled $false
	I0610 11:02:33.038991   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:33.050735   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:33.050735   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-368100 -Count 2
	I0610 11:02:35.296368   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:35.296368   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:35.296368   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-368100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\boot2docker.iso'
	I0610 11:02:38.015221   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:38.015221   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:38.015221   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-368100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\disk.vhd'
	I0610 11:02:40.749705   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:40.760916   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:40.760916   12440 main.go:141] libmachine: Starting VM...
	I0610 11:02:40.760916   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-368100
	I0610 11:02:44.000260   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:44.000260   12440 main.go:141] libmachine: [stderr =====>] : 
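
VM assembly, condensed: create the VM on the chosen switch, pin memory (dynamic memory disabled, so the guest keeps the full 2200MB), give it two vCPUs, attach the boot2docker ISO as the boot DVD and the prepared disk.vhd as the hard disk, then start it:

    $vmName = 'ha-368100'
    $machineDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100'
    Hyper-V\New-VM $vmName -Path $machineDir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName $vmName -DynamicMemoryEnabled $false   # fixed allocation, no ballooning
    Hyper-V\Set-VMProcessor $vmName -Count 2
    Hyper-V\Set-VMDvdDrive -VMName $vmName -Path "$machineDir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName $vmName -Path "$machineDir\disk.vhd"
    Hyper-V\Start-VM $vmName
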
	I0610 11:02:44.000260   12440 main.go:141] libmachine: Waiting for host to start...
	I0610 11:02:44.000260   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:02:46.375753   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:02:46.376423   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:46.376649   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:02:49.020133   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:49.020133   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:50.031339   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:02:52.334705   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:02:52.334705   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:52.334705   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:02:54.941801   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:54.950698   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:55.964880   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:02:58.276903   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:02:58.276903   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:58.276903   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:00.892760   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:03:00.896525   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:01.902514   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:04.215348   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:04.219344   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:04.219416   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:06.818278   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:03:06.818278   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:07.823340   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:10.153460   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:10.153460   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:10.168780   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:12.915947   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:12.927740   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:12.927740   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:15.181510   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:15.192990   12440 main.go:141] libmachine: [stderr =====>] : 
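
"Waiting for host to start..." is a poll loop: each round reads the VM state and the first IP address of the first network adapter, and an empty stdout simply means DHCP has not assigned an address yet. Here it takes five rounds (each roughly 6s, dominated by PowerShell start-up cost) before 172.17.146.64 appears. The equivalent loop:

    $vmName = 'ha-368100'
    do {
        Start-Sleep -Seconds 1
        $state = (Hyper-V\Get-VM $vmName).State
        $ip    = ((Hyper-V\Get-VM $vmName).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    "VM $vmName is $state at $ip"
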
	I0610 11:03:15.193051   12440 machine.go:94] provisionDockerMachine start ...
	I0610 11:03:15.193277   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:17.457073   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:17.468251   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:17.468251   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:20.201817   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:20.201817   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:20.207098   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:03:20.219301   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:03:20.219301   12440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:03:20.356174   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:03:20.356174   12440 buildroot.go:166] provisioning hostname "ha-368100"
	I0610 11:03:20.356262   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:22.601484   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:22.601484   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:22.605562   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:25.262174   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:25.273994   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:25.279094   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:03:25.279793   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:03:25.279793   12440 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-368100 && echo "ha-368100" | sudo tee /etc/hostname
	I0610 11:03:25.432011   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-368100
	
	I0610 11:03:25.432011   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:27.614482   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:27.614482   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:27.614482   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:30.248842   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:30.248842   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:30.255173   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:03:30.255972   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:03:30.255972   12440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-368100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-368100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-368100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:03:30.398701   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:03:30.398701   12440 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 11:03:30.398846   12440 buildroot.go:174] setting up certificates
	I0610 11:03:30.398846   12440 provision.go:84] configureAuth start
	I0610 11:03:30.398846   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:32.624633   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:32.624633   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:32.626333   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:35.444325   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:35.444325   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:35.444325   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:37.788416   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:37.788416   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:37.799957   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:40.491882   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:40.491882   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:40.503675   12440 provision.go:143] copyHostCerts
	I0610 11:03:40.503675   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 11:03:40.504394   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 11:03:40.504482   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 11:03:40.504918   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 11:03:40.505829   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 11:03:40.505829   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 11:03:40.505829   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 11:03:40.506584   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 11:03:40.507620   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 11:03:40.507823   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 11:03:40.507920   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 11:03:40.508312   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 11:03:40.508312   12440 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-368100 san=[127.0.0.1 172.17.146.64 ha-368100 localhost minikube]
	I0610 11:03:40.670397   12440 provision.go:177] copyRemoteCerts
	I0610 11:03:40.680915   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:03:40.680915   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:42.862469   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:42.873191   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:42.873240   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:45.478279   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:45.478279   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:45.478279   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:03:45.587731   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9067748s)
	I0610 11:03:45.587953   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 11:03:45.588476   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:03:45.639338   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 11:03:45.640090   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:03:45.685033   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 11:03:45.685033   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0610 11:03:45.734674   12440 provision.go:87] duration metric: took 15.335702s to configureAuth
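
configureAuth generates a server certificate whose SANs cover the VM's address and names (127.0.0.1, 172.17.146.64, ha-368100, localhost, minikube) and copies server-key.pem, ca.pem and server.pem into the guest's /etc/docker; most of the 15.3s is the repeated state/IP probes that precede each SSH session. A quick manual check that the material landed, using the machine key from the log:

    $sshKey = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    ssh -i $sshKey docker@172.17.146.64 'ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'
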
	I0610 11:03:45.734674   12440 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:03:45.735356   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:03:45.735356   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:47.889027   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:47.900591   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:47.900591   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:50.550215   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:50.550215   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:50.555302   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:03:50.556110   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:03:50.556110   12440 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 11:03:50.694000   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 11:03:50.694061   12440 buildroot.go:70] root file system type: tmpfs
	I0610 11:03:50.694285   12440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 11:03:50.694437   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:52.880057   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:52.891918   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:52.891918   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:55.504638   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:55.516434   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:55.522104   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:03:55.522964   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:03:55.522964   12440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 11:03:55.673968   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 11:03:55.673968   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:57.812688   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:57.812688   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:57.812688   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:00.389374   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:00.389413   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:00.395008   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:04:00.395008   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:04:00.395008   12440 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 11:04:02.542726   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
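
The command above is an install-if-changed idiom: diff the freshly written docker.service.new against the installed unit, and only when they differ (or diff fails) move it into place, daemon-reload, enable and restart. The diff error in the output is the expected first-boot case: no unit existed yet, so the new one was installed and enabled, hence the symlink message. To confirm afterwards:

    $sshKey = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    ssh -i $sshKey docker@172.17.146.64 'systemctl is-enabled docker && systemctl is-active docker'
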
	
	I0610 11:04:02.542726   12440 machine.go:97] duration metric: took 47.349218s to provisionDockerMachine
	I0610 11:04:02.542726   12440 client.go:171] duration metric: took 1m59.9326308s to LocalClient.Create
	I0610 11:04:02.543261   12440 start.go:167] duration metric: took 1m59.933165s to libmachine.API.Create "ha-368100"
	I0610 11:04:02.543318   12440 start.go:293] postStartSetup for "ha-368100" (driver="hyperv")
	I0610 11:04:02.543318   12440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:04:02.556166   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:04:02.556166   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:04.706640   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:04.717679   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:04.717679   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:07.276432   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:07.287391   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:07.287391   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:04:07.401175   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8449697s)
	I0610 11:04:07.413215   12440 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:04:07.420665   12440 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:04:07.420752   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 11:04:07.421337   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 11:04:07.421878   12440 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 11:04:07.421878   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 11:04:07.432572   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:04:07.453826   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 11:04:07.501143   12440 start.go:296] duration metric: took 4.9577846s for postStartSetup
	I0610 11:04:07.504262   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:09.702534   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:09.714376   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:09.714376   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:12.388616   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:12.388616   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:12.388616   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:04:12.391418   12440 start.go:128] duration metric: took 2m9.7845767s to createHost
	I0610 11:04:12.391418   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:14.581655   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:14.581655   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:14.581655   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:17.151121   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:17.162963   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:17.168650   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:04:17.169181   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:04:17.169181   12440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:04:17.294716   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718017457.306506406
	
	I0610 11:04:17.294802   12440 fix.go:216] guest clock: 1718017457.306506406
	I0610 11:04:17.294802   12440 fix.go:229] Guest: 2024-06-10 11:04:17.306506406 +0000 UTC Remote: 2024-06-10 11:04:12.3914184 +0000 UTC m=+135.536038001 (delta=4.915088006s)
	I0610 11:04:17.294915   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:19.465113   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:19.475914   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:19.475914   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:22.123998   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:22.129189   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:22.134964   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:04:22.135436   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:04:22.135501   12440 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718017457
	I0610 11:04:22.285677   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 11:04:17 UTC 2024
	
	I0610 11:04:22.285708   12440 fix.go:236] clock set: Mon Jun 10 11:04:17 UTC 2024
	 (err=<nil>)
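
The clock check compares the guest clock against the host clock at the moment createHost returned; the delta works out exactly as logged, after which the guest clock is set to whole seconds with the sudo date -s @1718017457 command above:

    # guest epoch minus host epoch (2024-06-10 11:04:12.3914184 UTC = 1718017452.3914184)
    1718017457.306506406 - 1718017452.3914184   # => 4.915088006, the logged delta
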
	I0610 11:04:22.285708   12440 start.go:83] releasing machines lock for "ha-368100", held for 2m19.6789681s
	I0610 11:04:22.286030   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:24.481728   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:24.492446   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:24.492446   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:27.108099   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:27.110016   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:27.114196   12440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:04:27.114196   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:27.124248   12440 ssh_runner.go:195] Run: cat /version.json
	I0610 11:04:27.124248   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:29.432647   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:29.432647   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:29.432647   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:29.445163   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:29.445163   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:29.445163   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:32.198454   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:32.198454   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:32.211162   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:04:32.225116   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:32.225116   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:32.226864   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:04:32.306394   12440 ssh_runner.go:235] Completed: cat /version.json: (5.1820414s)
	I0610 11:04:32.319337   12440 ssh_runner.go:195] Run: systemctl --version
	I0610 11:04:32.409137   12440 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2948974s)
	I0610 11:04:32.421251   12440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 11:04:32.430917   12440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:04:32.444411   12440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:04:32.473173   12440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:04:32.473173   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:04:32.473456   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:04:32.522113   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 11:04:32.563465   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 11:04:32.584014   12440 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 11:04:32.596106   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 11:04:32.627161   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:04:32.656288   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 11:04:32.688611   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:04:32.722100   12440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:04:32.756149   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 11:04:32.788419   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 11:04:32.819205   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 11:04:32.848460   12440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:04:32.883037   12440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:04:32.915036   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:33.126222   12440 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 11:04:33.161246   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:04:33.173871   12440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 11:04:33.213912   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:04:33.251251   12440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:04:33.292263   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:04:33.331796   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:04:33.367469   12440 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 11:04:33.443930   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:04:33.468729   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:04:33.514690   12440 ssh_runner.go:195] Run: which cri-dockerd
	I0610 11:04:33.534149   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 11:04:33.551078   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 11:04:33.595733   12440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 11:04:33.797400   12440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 11:04:34.001100   12440 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 11:04:34.001225   12440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 11:04:34.049571   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:34.246973   12440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 11:04:36.798735   12440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517407s)
	I0610 11:04:36.812035   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 11:04:36.848395   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:04:36.884185   12440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 11:04:37.097548   12440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 11:04:37.326309   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:37.560131   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 11:04:37.609297   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:04:37.645012   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:37.859606   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 11:04:37.982526   12440 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 11:04:37.995760   12440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 11:04:38.012340   12440 start.go:562] Will wait 60s for crictl version
	I0610 11:04:38.027626   12440 ssh_runner.go:195] Run: which crictl
	I0610 11:04:38.051495   12440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:04:38.124781   12440 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 11:04:38.135366   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:04:38.179142   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:04:38.217010   12440 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 11:04:38.217073   12440 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 11:04:38.222178   12440 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 11:04:38.222178   12440 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 11:04:38.222178   12440 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 11:04:38.222178   12440 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 11:04:38.225863   12440 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 11:04:38.225863   12440 ip.go:210] interface addr: 172.17.144.1/20
	I0610 11:04:38.237472   12440 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 11:04:38.240252   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
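
host.minikube.internal resolution scans the host's interfaces for the adapter backing the chosen switch; here that is "vEthernet (Default Switch)" with 172.17.144.1/20, and the grep/echo one-liner above appends that address to the guest's /etc/hosts without duplicating an existing entry. The host-side lookup, standalone:

    # IPv4 address of the host-side vEthernet adapter for the Default Switch
    Get-NetIPAddress -InterfaceAlias 'vEthernet (Default Switch)' -AddressFamily IPv4 |
        Select-Object -ExpandProperty IPAddress
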
	I0610 11:04:38.277265   12440 kubeadm.go:877] updating cluster {Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:04:38.277854   12440 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 11:04:38.286946   12440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 11:04:38.308539   12440 docker.go:685] Got preloaded images: 
	I0610 11:04:38.308539   12440 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0610 11:04:38.320838   12440 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 11:04:38.352211   12440 ssh_runner.go:195] Run: which lz4
	I0610 11:04:38.358513   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0610 11:04:38.370904   12440 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 11:04:38.377367   12440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 11:04:38.377367   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0610 11:04:40.698429   12440 docker.go:649] duration metric: took 2.3396203s to copy over tarball
	I0610 11:04:40.711381   12440 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 11:04:49.326132   12440 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6146327s)
	I0610 11:04:49.326132   12440 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 11:04:49.393166   12440 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 11:04:49.411533   12440 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0610 11:04:49.459436   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:49.692196   12440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 11:04:52.830026   12440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1314938s)
	I0610 11:04:52.843590   12440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 11:04:52.874546   12440 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
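
The preload flow in the lines above: docker images returns nothing, so kube-apiserver:v1.30.1 "wasn't preloaded"; the roughly 343MB lz4 image tarball is copied to /preloaded.tar.lz4, unpacked into /var with tar -I lz4 (8.6s), repositories.json is restored, and docker is restarted, after which the image list contains the full v1.30.1 control plane shown. Re-checking from the host:

    $sshKey = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    ssh -i $sshKey docker@172.17.146.64 'docker images --format "{{.Repository}}:{{.Tag}}"'
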
	I0610 11:04:52.874546   12440 cache_images.go:84] Images are preloaded, skipping loading
	I0610 11:04:52.874546   12440 kubeadm.go:928] updating node { 172.17.146.64 8443 v1.30.1 docker true true} ...
	I0610 11:04:52.874546   12440 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-368100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.146.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:04:52.887211   12440 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 11:04:52.930870   12440 cni.go:84] Creating CNI manager for ""
	I0610 11:04:52.930939   12440 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 11:04:52.930939   12440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:04:52.930939   12440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.146.64 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-368100 NodeName:ha-368100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.146.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.146.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 11:04:52.931109   12440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.146.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-368100"
	  kubeletExtraArgs:
	    node-ip: 172.17.146.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.146.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
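	The YAML block above is the fully rendered form of the kubeadm options struct logged at kubeadm.go:181: minikube fills a Go text/template with the per-node values and later writes the result to /var/tmp/minikube/kubeadm.yaml.new. A minimal, illustrative sketch of that rendering step follows; the template and struct here are deliberately simplified placeholders (not the project's actual bootstrapper template), populated with the values from this log.

    // render_kubeadm.go - illustrative sketch: render a kubeadm config by
    // filling a text/template with per-node values, as the step above does.
    package main

    import (
        "os"
        "text/template"
    )

    // params holds the handful of values that vary per node in this log.
    type params struct {
        AdvertiseAddress string // 172.17.146.64
        BindPort         int    // 8443
        NodeName         string // ha-368100
        PodSubnet        string // 10.244.0.0/16
        K8sVersion       string // v1.30.1
    }

    // A trimmed-down, hypothetical template covering only a few fields.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        p := params{"172.17.146.64", 8443, "ha-368100", "10.244.0.0/16", "v1.30.1"}
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }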
	I0610 11:04:52.931109   12440 kube-vip.go:115] generating kube-vip config ...
	I0610 11:04:52.943867   12440 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 11:04:52.970787   12440 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 11:04:52.975030   12440 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
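	The manifest above runs kube-vip as a static pod on each control plane: it claims the HA VIP 172.17.159.254 via ARP, elects a holder through the plndr-cp-lock lease (5s lease, 3s renew deadline, 1s retry), and, with lb_enable auto-enabled as logged at kube-vip.go:167, balances port 8443 across control-plane nodes. A minimal sketch for probing the API server through that VIP follows; it is illustrative only and skips TLS verification instead of loading minikubeCA.

    // vip_probe.go - sketch: check /healthz on the API server behind the
    // kube-vip VIP from the manifest above. The VIP and port come from the
    // log; /healthz is readable anonymously via system:public-info-viewer.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Sketch only: the serving cert is signed by minikubeCA,
                // which this example does not load.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://172.17.159.254:8443/healthz")
        if err != nil {
            fmt.Println("VIP not reachable (no kube-vip leader yet?):", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz via VIP: %s %s\n", resp.Status, body)
    }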
	I0610 11:04:52.990109   12440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:04:53.008628   12440 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:04:53.019350   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0610 11:04:53.047435   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0610 11:04:53.092622   12440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:04:53.129469   12440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0610 11:04:53.170544   12440 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0610 11:04:53.225214   12440 ssh_runner.go:195] Run: grep 172.17.159.254	control-plane.minikube.internal$ /etc/hosts
	I0610 11:04:53.230321   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:04:53.271797   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:53.481656   12440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:04:53.510196   12440 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100 for IP: 172.17.146.64
	I0610 11:04:53.510196   12440 certs.go:194] generating shared ca certs ...
	I0610 11:04:53.510196   12440 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.510537   12440 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 11:04:53.511300   12440 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 11:04:53.511300   12440 certs.go:256] generating profile certs ...
	I0610 11:04:53.512754   12440 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.key
	I0610 11:04:53.513010   12440 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.crt with IP's: []
	I0610 11:04:53.606090   12440 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.crt ...
	I0610 11:04:53.606090   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.crt: {Name:mk2a90b8a3b74b17766eccbbc7eb46ce1b98ceeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.609586   12440 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.key ...
	I0610 11:04:53.609586   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.key: {Name:mk39c314ca788ad0206c8642c3190c202dbc04c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.611029   12440 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.0d7aa7cf
	I0610 11:04:53.611029   12440 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.0d7aa7cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.146.64 172.17.159.254]
	I0610 11:04:53.731539   12440 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.0d7aa7cf ...
	I0610 11:04:53.731539   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.0d7aa7cf: {Name:mk7a21b8eaf4af1418373c971f9fa2b030f5ba9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.736711   12440 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.0d7aa7cf ...
	I0610 11:04:53.736711   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.0d7aa7cf: {Name:mk60fe7b12e81d355e1985baf98674887649e60b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.737900   12440 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.0d7aa7cf -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt
	I0610 11:04:53.755477   12440 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.0d7aa7cf -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key
	I0610 11:04:53.756752   12440 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key
	I0610 11:04:53.756752   12440 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt with IP's: []
	I0610 11:04:53.875815   12440 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt ...
	I0610 11:04:53.875815   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt: {Name:mk1234aefdfbc9800322c56901472a33ef071cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.883806   12440 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key ...
	I0610 11:04:53.883806   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key: {Name:mke6191aa44f1764991acc108c6dbcfd72efa276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.885078   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 11:04:53.885078   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 11:04:53.886244   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 11:04:53.886476   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 11:04:53.886702   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 11:04:53.886870   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 11:04:53.887042   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 11:04:53.887190   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 11:04:53.887190   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 11:04:53.897079   12440 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 11:04:53.897079   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 11:04:53.897467   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 11:04:53.897859   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 11:04:53.898264   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 11:04:53.898498   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 11:04:53.898498   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:04:53.899047   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 11:04:53.899361   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 11:04:53.899642   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:04:53.944039   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:04:53.997624   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:04:54.045119   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 11:04:54.091554   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 11:04:54.135936   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 11:04:54.184277   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:04:54.227351   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:04:54.272845   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:04:54.318334   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 11:04:54.367569   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 11:04:54.410275   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:04:54.460823   12440 ssh_runner.go:195] Run: openssl version
	I0610 11:04:54.482114   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:04:54.515962   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:04:54.522457   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:04:54.536667   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:04:54.557960   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:04:54.598422   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 11:04:54.633477   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 11:04:54.641525   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 11:04:54.656122   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 11:04:54.682475   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 11:04:54.718620   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 11:04:54.753843   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 11:04:54.763678   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 11:04:54.775076   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 11:04:54.798882   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
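	The openssl x509 -hash invocations above compute each certificate's subject hash, and the ln -fs commands create the <hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL-based clients use to look CAs up by hash in /etc/ssl/certs. A small sketch of the same two steps, assuming openssl is on PATH and the process can write to /etc/ssl/certs:

    // cert_hash_link.go - sketch of the hashing step above: ask openssl for
    // a certificate's subject hash, then create the <hash>.0 symlink that
    // TLS libraries resolve in /etc/ssl/certs. Paths mirror the log.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs equivalent: drop any stale link, then create a fresh one.
        _ = os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }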
	I0610 11:04:54.829193   12440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:04:54.837714   12440 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 11:04:54.837714   12440 kubeadm.go:391] StartCluster: {Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:04:54.846573   12440 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 11:04:54.886378   12440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 11:04:54.920495   12440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:04:54.950571   12440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:04:54.975120   12440 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:04:54.975120   12440 kubeadm.go:156] found existing configuration files:
	
	I0610 11:04:54.986502   12440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:04:55.006300   12440 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:04:55.017815   12440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:04:55.051224   12440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:04:55.072465   12440 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:04:55.085408   12440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:04:55.115640   12440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:04:55.133141   12440 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:04:55.146882   12440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:04:55.172674   12440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:04:55.191555   12440 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:04:55.203515   12440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
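	The grep/rm sequence above keeps each kubeconfig only when it already points at the expected control-plane endpoint and removes it otherwise, so kubeadm regenerates all four files on init. A compact sketch of that check-then-remove loop over the same files (run with sufficient privileges to delete under /etc/kubernetes):

    // stale_config_cleanup.go - sketch of the loop above: keep a kubeconfig
    // only if it already references the expected control-plane endpoint;
    // otherwise remove it so kubeadm writes a fresh one.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: delete, ignoring errors,
                // like the unconditional `rm -f` in the log.
                _ = os.Remove(f)
                fmt.Println("removed stale", f)
            }
        }
    }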
	I0610 11:04:55.222570   12440 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:04:55.696387   12440 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:05:10.634849   12440 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 11:05:10.635021   12440 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:05:10.635196   12440 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:05:10.635305   12440 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:05:10.635615   12440 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0610 11:05:10.635839   12440 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:05:10.639877   12440 out.go:204]   - Generating certificates and keys ...
	I0610 11:05:10.640444   12440 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:05:10.640673   12440 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:05:10.640673   12440 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 11:05:10.640673   12440 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 11:05:10.640673   12440 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 11:05:10.641386   12440 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 11:05:10.641386   12440 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 11:05:10.641386   12440 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-368100 localhost] and IPs [172.17.146.64 127.0.0.1 ::1]
	I0610 11:05:10.641930   12440 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 11:05:10.642092   12440 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-368100 localhost] and IPs [172.17.146.64 127.0.0.1 ::1]
	I0610 11:05:10.642092   12440 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 11:05:10.642092   12440 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 11:05:10.642092   12440 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 11:05:10.642092   12440 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:05:10.642092   12440 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:05:10.642092   12440 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 11:05:10.643198   12440 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:05:10.643387   12440 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:05:10.643387   12440 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:05:10.643387   12440 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:05:10.643387   12440 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:05:10.646726   12440 out.go:204]   - Booting up control plane ...
	I0610 11:05:10.646812   12440 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:05:10.646812   12440 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:05:10.646812   12440 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:05:10.646812   12440 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:05:10.646812   12440 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:05:10.646812   12440 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:05:10.647975   12440 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 11:05:10.647975   12440 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 11:05:10.647975   12440 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.574543ms
	I0610 11:05:10.648708   12440 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 11:05:10.648708   12440 kubeadm.go:309] [api-check] The API server is healthy after 8.003604396s
	I0610 11:05:10.648708   12440 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 11:05:10.648708   12440 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 11:05:10.648708   12440 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 11:05:10.648708   12440 kubeadm.go:309] [mark-control-plane] Marking the node ha-368100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 11:05:10.648708   12440 kubeadm.go:309] [bootstrap-token] Using token: 32k9jv.cizb7zxknrcsuenl
	I0610 11:05:10.652141   12440 out.go:204]   - Configuring RBAC rules ...
	I0610 11:05:10.652141   12440 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 11:05:10.652141   12440 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 11:05:10.653723   12440 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 11:05:10.653723   12440 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 11:05:10.653723   12440 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 11:05:10.653723   12440 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 11:05:10.653723   12440 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 11:05:10.653723   12440 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 11:05:10.653723   12440 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 11:05:10.653723   12440 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 11:05:10.653723   12440 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 11:05:10.653723   12440 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 11:05:10.653723   12440 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 11:05:10.653723   12440 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 32k9jv.cizb7zxknrcsuenl \
	I0610 11:05:10.653723   12440 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 \
	I0610 11:05:10.653723   12440 kubeadm.go:309] 	--control-plane 
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 32k9jv.cizb7zxknrcsuenl \
	I0610 11:05:10.657964   12440 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
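	The --discovery-token-ca-cert-hash in the join commands above is a sha256 digest over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate, which lets joining nodes pin the CA before trusting the API server. A short sketch that reproduces the value from ca.crt:

    // ca_cert_hash.go - sketch: derive the kubeadm discovery hash as
    // sha256(SubjectPublicKeyInfo DER) of the cluster CA certificate.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Re-encode just the public key as DER SubjectPublicKeyInfo.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }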
	I0610 11:05:10.658022   12440 cni.go:84] Creating CNI manager for ""
	I0610 11:05:10.658022   12440 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 11:05:10.658357   12440 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 11:05:10.675642   12440 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 11:05:10.684461   12440 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 11:05:10.684572   12440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 11:05:10.731840   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 11:05:11.343674   12440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 11:05:11.356292   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:11.359401   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-368100 minikube.k8s.io/updated_at=2024_06_10T11_05_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-368100 minikube.k8s.io/primary=true
	I0610 11:05:11.383558   12440 ops.go:34] apiserver oom_adj: -16
	I0610 11:05:11.613539   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:12.128141   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:12.613092   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:13.120355   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:13.612960   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:14.112905   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:14.613297   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:15.116439   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:15.619729   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:16.115745   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:16.615031   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:17.120418   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:17.624669   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:18.113893   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:18.616014   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:19.116938   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:19.629882   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:20.134013   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:20.628247   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:21.122092   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:21.617098   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:22.118191   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:22.616300   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:23.124398   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:23.623705   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:24.127871   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:24.270783   12440 kubeadm.go:1107] duration metric: took 12.9269448s to wait for elevateKubeSystemPrivileges
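	The repeated `kubectl get sa default` runs above are a poll: immediately after init the default ServiceAccount may not exist yet, so the check is retried on a roughly 500ms interval until it succeeds (12.9s here) before the minikube-rbac clusterrolebinding can take effect. A sketch of that loop with an explicit deadline, reusing the binary and kubeconfig paths from the log:

    // wait_default_sa.go - sketch: poll for the `default` ServiceAccount
    // with a fixed interval and an overall timeout, as the step above does.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.30.1/kubectl" // from the log
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default ServiceAccount")
    }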
	W0610 11:05:24.270885   12440 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 11:05:24.270885   12440 kubeadm.go:393] duration metric: took 29.4329292s to StartCluster
	I0610 11:05:24.270885   12440 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:05:24.271057   12440 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 11:05:24.273195   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:05:24.274615   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 11:05:24.274870   12440 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:05:24.274926   12440 start.go:240] waiting for startup goroutines ...
	I0610 11:05:24.274870   12440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 11:05:24.275091   12440 addons.go:69] Setting storage-provisioner=true in profile "ha-368100"
	I0610 11:05:24.275197   12440 addons.go:234] Setting addon storage-provisioner=true in "ha-368100"
	I0610 11:05:24.275394   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:05:24.276816   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:05:24.277348   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:05:24.278197   12440 addons.go:69] Setting default-storageclass=true in profile "ha-368100"
	I0610 11:05:24.278197   12440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-368100"
	I0610 11:05:24.279108   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:05:24.420262   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 11:05:24.941079   12440 start.go:946] {"host.minikube.internal": 172.17.144.1} host record injected into CoreDNS's ConfigMap
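	The sed pipeline above rewrites the CoreDNS Corefile in place: a `hosts` stanza mapping host.minikube.internal to the host gateway (172.17.144.1) is injected just above the `forward` plugin, so in-cluster lookups of the host name resolve locally and everything else still falls through to /etc/resolv.conf. A sketch of the same edit done as plain string manipulation; the sample Corefile is illustrative, not the exact ConfigMap contents:

    // corefile_hosts.go - sketch: inject a `hosts` stanza above the
    // `forward` plugin line, mirroring the sed pipeline in the log.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}`
        hosts := `        hosts {
           172.17.144.1 host.minikube.internal
           fallthrough
        }`
        var out []string
        for _, line := range strings.Split(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out = append(out, hosts) // inject just above the forward plugin
            }
            out = append(out, line)
        }
        fmt.Println(strings.Join(out, "\n"))
    }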
	I0610 11:05:26.686354   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:05:26.686354   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:26.699940   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:05:26.699940   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:26.706445   12440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:05:26.700411   12440 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 11:05:26.709423   12440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:05:26.709423   12440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 11:05:26.709521   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:05:26.709521   12440 kapi.go:59] client config for ha-368100: &rest.Config{Host:"https://172.17.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 11:05:26.710832   12440 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 11:05:26.711781   12440 addons.go:234] Setting addon default-storageclass=true in "ha-368100"
	I0610 11:05:26.711781   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:05:26.712636   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:05:29.169044   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:05:29.169044   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:29.172503   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:05:29.226862   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:05:29.226862   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:29.231269   12440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 11:05:29.231269   12440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 11:05:29.231269   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:05:31.594701   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:05:31.594764   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:31.594825   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:05:32.027481   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:05:32.027481   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:32.028489   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:05:32.184636   12440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:05:34.372363   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:05:34.384229   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:34.384708   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:05:34.511307   12440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 11:05:34.664026   12440 round_trippers.go:463] GET https://172.17.159.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 11:05:34.664026   12440 round_trippers.go:469] Request Headers:
	I0610 11:05:34.664026   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:05:34.664026   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:05:34.679930   12440 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0610 11:05:34.680575   12440 round_trippers.go:463] PUT https://172.17.159.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0610 11:05:34.680575   12440 round_trippers.go:469] Request Headers:
	I0610 11:05:34.680575   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:05:34.680575   12440 round_trippers.go:473]     Content-Type: application/json
	I0610 11:05:34.680575   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:05:34.683700   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:05:34.688602   12440 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 11:05:34.692355   12440 addons.go:510] duration metric: took 10.4174649s for enable addons: enabled=[storage-provisioner default-storageclass]
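	The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above marks the `standard` StorageClass as the cluster default by setting the storageclass.kubernetes.io/is-default-class annotation. An equivalent sketch using client-go rather than raw REST round-trips; the kubeconfig path is the in-VM one from the log, so this would run where that file and the API server are reachable:

    // default_storageclass.go - sketch: annotate the `standard`
    // StorageClass as the cluster default via client-go.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("standard is now the default StorageClass")
    }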
	I0610 11:05:34.692355   12440 start.go:245] waiting for cluster config update ...
	I0610 11:05:34.692355   12440 start.go:254] writing updated cluster config ...
	I0610 11:05:34.695813   12440 out.go:177] 
	I0610 11:05:34.706396   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:05:34.706716   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:05:34.713641   12440 out.go:177] * Starting "ha-368100-m02" control-plane node in "ha-368100" cluster
	I0610 11:05:34.715656   12440 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 11:05:34.715656   12440 cache.go:56] Caching tarball of preloaded images
	I0610 11:05:34.715656   12440 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 11:05:34.715656   12440 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 11:05:34.715656   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:05:34.720412   12440 start.go:360] acquireMachinesLock for ha-368100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:05:34.720548   12440 start.go:364] duration metric: took 136.4µs to acquireMachinesLock for "ha-368100-m02"
	I0610 11:05:34.720775   12440 start.go:93] Provisioning new machine with config: &{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:05:34.720775   12440 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0610 11:05:34.721589   12440 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 11:05:34.721589   12440 start.go:159] libmachine.API.Create for "ha-368100" (driver="hyperv")
	I0610 11:05:34.725091   12440 client.go:168] LocalClient.Create starting
	I0610 11:05:34.725134   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 11:05:34.725134   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:05:34.725134   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:05:34.725134   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 11:05:34.725134   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:05:34.726267   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:05:34.726267   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 11:05:36.669146   12440 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 11:05:36.669146   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:36.678261   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 11:05:38.459777   12440 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 11:05:38.459777   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:38.463128   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:05:40.029214   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:05:40.029214   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:40.029214   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:05:43.779374   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:05:43.791301   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:43.794327   12440 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 11:05:44.294743   12440 main.go:141] libmachine: Creating SSH key...
	I0610 11:05:44.754791   12440 main.go:141] libmachine: Creating VM...
	I0610 11:05:44.754791   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:05:47.715940   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:05:47.715940   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:47.715940   12440 main.go:141] libmachine: Using switch "Default Switch"
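
The switch lookup above illustrates the driver's general pattern for talking to Hyper-V: shell out to powershell.exe with -NoProfile -NonInteractive, force UTF-8 console output, and decode the ConvertTo-Json result. A minimal Go sketch of that pattern, with illustrative names (vmSwitch and listSwitches are not minikube's real identifiers):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// vmSwitch mirrors the three properties selected in the log's pipeline.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func listSwitches() ([]vmSwitch, error) {
	// @(...) forces PowerShell to emit a JSON array even for a single switch.
	script := "[Console]::OutputEncoding = [Text.Encoding]::UTF8; " +
		"ConvertTo-Json @(Hyper-V\\Get-VMSwitch | Select Id, Name, SwitchType)"
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		return nil, err
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		return nil, err
	}
	return switches, nil
}

func main() {
	switches, err := listSwitches()
	if err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("%s  id=%s  type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}
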
	I0610 11:05:47.728151   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:05:49.523178   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:05:49.523178   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:49.523178   12440 main.go:141] libmachine: Creating VHD
	I0610 11:05:49.532162   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 11:05:53.455885   12440 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5890BBC2-C159-4447-8A45-AC73CC907BB4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 11:05:53.455885   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:53.455885   12440 main.go:141] libmachine: Writing magic tar header
	I0610 11:05:53.467821   12440 main.go:141] libmachine: Writing SSH key tar header
	I0610 11:05:53.477568   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 11:05:56.705478   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:05:56.716394   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:56.716394   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\disk.vhd' -SizeBytes 20000MB
	I0610 11:05:59.329886   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:05:59.342333   12440 main.go:141] libmachine: [stderr =====>] : 
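
The fixed.vhd-to-disk.vhd sequence above is the boot2docker disk bootstrap inherited from docker-machine: a tiny 10MB fixed VHD is created, its raw payload is overwritten with a tar stream carrying the freshly generated SSH key (the "magic tar header" / "SSH key tar header" steps in the log), and the file is then converted to a dynamic VHD and resized to the requested 20000MB; on first boot the guest detects the tar signature, formats the disk, and installs the key. A hedged sketch of the tar-writing step only — the file layout and paths here are assumptions, not the verified minikube implementation:

package main

import (
	"archive/tar"
	"os"
)

// writeMagicTar overwrites the start of a raw fixed VHD with a tar stream
// containing the SSH public key, so the guest can pick it up on first boot.
func writeMagicTar(vhdPath string, pubKey []byte) error {
	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f)
	if err := tw.WriteHeader(&tar.Header{
		Name:     ".ssh/",
		Typeflag: tar.TypeDir,
		Mode:     0o700,
	}); err != nil {
		return err
	}
	if err := tw.WriteHeader(&tar.Header{
		Name: ".ssh/authorized_keys",
		Mode: 0o600,
		Size: int64(len(pubKey)),
	}); err != nil {
		return err
	}
	if _, err := tw.Write(pubKey); err != nil {
		return err
	}
	// Close flushes the archive's trailing blocks; the rest of the VHD,
	// including its footer, is left untouched.
	return tw.Close()
}

func main() {
	key, err := os.ReadFile("id_rsa.pub") // illustrative path
	if err != nil {
		panic(err)
	}
	if err := writeMagicTar("fixed.vhd", key); err != nil {
		panic(err)
	}
}
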
	I0610 11:05:59.342333   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-368100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 11:06:03.095745   12440 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-368100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 11:06:03.101476   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:03.101476   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-368100-m02 -DynamicMemoryEnabled $false
	I0610 11:06:05.414556   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:05.414556   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:05.425452   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-368100-m02 -Count 2
	I0610 11:06:07.710644   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:07.710644   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:07.710644   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-368100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\boot2docker.iso'
	I0610 11:06:10.402317   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:10.413282   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:10.413282   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-368100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\disk.vhd'
	I0610 11:06:13.211502   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:13.211502   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:13.211502   12440 main.go:141] libmachine: Starting VM...
	I0610 11:06:13.211502   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-368100-m02
	I0610 11:06:16.386038   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:16.386038   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:16.386038   12440 main.go:141] libmachine: Waiting for host to start...
	I0610 11:06:16.388127   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:18.790351   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:18.796013   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:18.796013   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:21.455999   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:21.456057   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:22.469472   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:24.786678   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:24.786678   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:24.786743   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:27.449360   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:27.449399   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:28.463355   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:30.805837   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:30.805837   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:30.805837   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:33.487310   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:33.487352   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:34.500449   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:36.802224   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:36.802224   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:36.805010   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:39.495387   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:39.497780   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:40.513112   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:42.939759   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:42.939759   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:42.941066   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:45.661914   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:06:45.661914   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:45.661914   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:47.944336   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:47.944336   12440 main.go:141] libmachine: [stderr =====>] : 
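
The six near-identical state/IP probes above are a plain polling loop: DHCP on the Default Switch takes roughly 30 seconds in this run, and until a lease lands, ipaddresses[0] comes back empty. A simplified sketch of such a loop (the helper names and the timeout are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runPS executes one PowerShell snippet and returns its trimmed stdout.
func runPS(script string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls the VM's first adapter until DHCP hands out an address.
func waitForIP(vmName string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ip, err := runPS(fmt.Sprintf(
			"((Hyper-V\\Get-VM %s).networkadapters[0]).ipaddresses[0]", vmName))
		if err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(time.Second) // matches the ~1s pause between probes in the log
	}
	return "", fmt.Errorf("timed out waiting for %s to acquire an IP", vmName)
}

func main() {
	ip, err := waitForIP("ha-368100-m02", 3*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("VM reachable at", ip)
}
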
	I0610 11:06:47.946867   12440 machine.go:94] provisionDockerMachine start ...
	I0610 11:06:47.946867   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:50.270564   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:50.270564   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:50.270564   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:52.985488   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:06:52.985488   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:52.993231   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:06:53.002410   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:06:53.002410   12440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:06:53.149356   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:06:53.149356   12440 buildroot.go:166] provisioning hostname "ha-368100-m02"
	I0610 11:06:53.149356   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:55.398079   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:55.398079   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:55.406195   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:58.011924   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:06:58.023354   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:58.028860   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:06:58.029648   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:06:58.029648   12440 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-368100-m02 && echo "ha-368100-m02" | sudo tee /etc/hostname
	I0610 11:06:58.200876   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-368100-m02
	
	I0610 11:06:58.200876   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:00.433029   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:00.436943   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:00.436943   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:03.072094   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:03.072094   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:03.084613   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:03.084791   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:03.084791   12440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-368100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-368100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-368100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:07:03.237163   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
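
All of the "About to run SSH command" steps above go through a native Go SSH client authenticated with the id_rsa generated earlier. A minimal sketch of that pattern using golang.org/x/crypto/ssh — the key path and hostname command are taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, keyPath, cmd string) (string, error) {
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("172.17.157.100:22",
		`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\id_rsa`,
		`sudo hostname ha-368100-m02 && echo "ha-368100-m02" | sudo tee /etc/hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
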
	I0610 11:07:03.237163   12440 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 11:07:03.237163   12440 buildroot.go:174] setting up certificates
	I0610 11:07:03.237163   12440 provision.go:84] configureAuth start
	I0610 11:07:03.237163   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:05.446322   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:05.458015   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:05.458393   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:08.150210   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:08.162252   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:08.162252   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:10.398427   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:10.398427   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:10.410604   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:13.113397   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:13.113397   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:13.113397   12440 provision.go:143] copyHostCerts
	I0610 11:07:13.113397   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 11:07:13.114086   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 11:07:13.114146   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 11:07:13.114299   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 11:07:13.115590   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 11:07:13.115817   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 11:07:13.115817   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 11:07:13.115817   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 11:07:13.117059   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 11:07:13.117433   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 11:07:13.117433   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 11:07:13.117509   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 11:07:13.118729   12440 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-368100-m02 san=[127.0.0.1 172.17.157.100 ha-368100-m02 localhost minikube]
	I0610 11:07:13.482499   12440 provision.go:177] copyRemoteCerts
	I0610 11:07:13.492832   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:07:13.492832   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:15.747120   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:15.759013   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:15.759013   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:18.415303   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:18.415303   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:18.426918   12440 sshutil.go:53] new ssh client: &{IP:172.17.157.100 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\id_rsa Username:docker}
	I0610 11:07:18.542108   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.049172s)
	I0610 11:07:18.542108   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 11:07:18.542108   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:07:18.590588   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 11:07:18.590839   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 11:07:18.639326   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 11:07:18.639515   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:07:18.693044   12440 provision.go:87] duration metric: took 15.4556989s to configureAuth
	I0610 11:07:18.693044   12440 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:07:18.693672   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:07:18.693672   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:20.932378   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:20.932378   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:20.943502   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:23.623583   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:23.623583   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:23.641572   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:23.642117   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:23.642117   12440 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 11:07:23.782636   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 11:07:23.782636   12440 buildroot.go:70] root file system type: tmpfs
	I0610 11:07:23.782636   12440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 11:07:23.782636   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:25.990128   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:26.000869   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:26.000977   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:28.660797   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:28.660797   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:28.677498   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:28.678024   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:28.678193   12440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.146.64"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 11:07:28.850877   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.146.64
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 11:07:28.850877   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:31.120865   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:31.129265   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:31.129649   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:33.782449   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:33.794517   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:33.799879   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:33.800495   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:33.801097   12440 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 11:07:35.960455   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 11:07:35.960455   12440 machine.go:97] duration metric: took 48.0131942s to provisionDockerMachine
	I0610 11:07:35.960455   12440 client.go:171] duration metric: took 2m1.2343262s to LocalClient.Create
	I0610 11:07:35.960455   12440 start.go:167] duration metric: took 2m1.2378712s to libmachine.API.Create "ha-368100"
	I0610 11:07:35.960455   12440 start.go:293] postStartSetup for "ha-368100-m02" (driver="hyperv")
	I0610 11:07:35.960455   12440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:07:35.975212   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:07:35.975212   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:38.214881   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:38.214881   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:38.214881   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:40.945057   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:40.956843   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:40.957304   12440 sshutil.go:53] new ssh client: &{IP:172.17.157.100 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\id_rsa Username:docker}
	I0610 11:07:41.067052   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0916627s)
	I0610 11:07:41.078391   12440 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:07:41.087283   12440 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:07:41.087283   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 11:07:41.087466   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 11:07:41.088509   12440 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 11:07:41.088592   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 11:07:41.098643   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:07:41.124772   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 11:07:41.182046   12440 start.go:296] duration metric: took 5.2215482s for postStartSetup
	I0610 11:07:41.184646   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:43.487837   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:43.487837   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:43.499562   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:46.146918   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:46.146918   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:46.158701   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:07:46.161348   12440 start.go:128] duration metric: took 2m11.4394948s to createHost
	I0610 11:07:46.161450   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:48.397158   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:48.397158   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:48.397248   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:51.072071   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:51.072071   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:51.088369   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:51.088505   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:51.088505   12440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:07:51.225224   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718017671.222315860
	
	I0610 11:07:51.225359   12440 fix.go:216] guest clock: 1718017671.222315860
	I0610 11:07:51.225359   12440 fix.go:229] Guest: 2024-06-10 11:07:51.22231586 +0000 UTC Remote: 2024-06-10 11:07:46.1614505 +0000 UTC m=+349.304317201 (delta=5.06086536s)
	I0610 11:07:51.225472   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:53.455300   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:53.466243   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:53.466243   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:56.078244   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:56.078244   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:56.095662   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:56.096356   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:56.096356   12440 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718017671
	I0610 11:07:56.261460   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 11:07:51 UTC 2024
	
	I0610 11:07:56.261485   12440 fix.go:236] clock set: Mon Jun 10 11:07:51 UTC 2024
	 (err=<nil>)
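
The fix.go lines above show the clock-sync step: the guest clock is sampled with "date +%s.%N", compared against the host-side timestamp, and then set over SSH with "date -s @<seconds>", dropping the fractional part. A small sketch of the parsing and drift check; the two-second tolerance is an assumption, not a value from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output like "1718017671.222315860"
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1718017671.222315860")
	if err != nil {
		panic(err)
	}
	drift := time.Since(guest)
	if drift < 0 {
		drift = -drift
	}
	fmt.Printf("guest clock drift: %s\n", drift)
	if drift > 2*time.Second { // tolerance is an assumption
		// mirrors the log's whole-second reset command
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}
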
	I0610 11:07:56.261485   12440 start.go:83] releasing machines lock for "ha-368100-m02", held for 2m21.5396776s
	I0610 11:07:56.261485   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:58.548735   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:58.560574   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:58.560574   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:08:01.185177   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:08:01.185177   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:01.199882   12440 out.go:177] * Found network options:
	I0610 11:08:01.202263   12440 out.go:177]   - NO_PROXY=172.17.146.64
	W0610 11:08:01.204329   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 11:08:01.210018   12440 out.go:177]   - NO_PROXY=172.17.146.64
	W0610 11:08:01.212187   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 11:08:01.214359   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 11:08:01.215592   12440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:08:01.215592   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:08:01.227487   12440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 11:08:01.227487   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:08:03.502728   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:08:03.502784   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:03.502784   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:08:03.515565   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:08:03.515565   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:03.515565   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:08:06.284817   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:08:06.284881   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:06.284881   12440 sshutil.go:53] new ssh client: &{IP:172.17.157.100 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\id_rsa Username:docker}
	I0610 11:08:06.310575   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:08:06.310575   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:06.310575   12440 sshutil.go:53] new ssh client: &{IP:172.17.157.100 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\id_rsa Username:docker}
	I0610 11:08:06.436788   12440 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2092589s)
	W0610 11:08:06.436788   12440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:08:06.436788   12440 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2211535s)
	I0610 11:08:06.449762   12440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:08:06.490382   12440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:08:06.490382   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:08:06.490382   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:08:06.546536   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 11:08:06.580658   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 11:08:06.601189   12440 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 11:08:06.615653   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 11:08:06.649669   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:08:06.681082   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 11:08:06.715149   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:08:06.751606   12440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:08:06.789191   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 11:08:06.823439   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 11:08:06.858778   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 11:08:06.901030   12440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:08:06.933931   12440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:08:06.964057   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:08:07.189106   12440 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 11:08:07.229555   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:08:07.246311   12440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 11:08:07.286747   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:08:07.334693   12440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:08:07.384197   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:08:07.424748   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:08:07.463491   12440 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 11:08:07.531605   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:08:07.564024   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:08:07.612157   12440 ssh_runner.go:195] Run: which cri-dockerd
	I0610 11:08:07.633211   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 11:08:07.653176   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 11:08:07.701670   12440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 11:08:07.929897   12440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 11:08:08.136834   12440 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 11:08:08.137399   12440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 11:08:08.188562   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:08:08.387661   12440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 11:08:10.929454   12440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5417042s)
	I0610 11:08:10.940332   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 11:08:10.978323   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:08:11.020669   12440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 11:08:11.244716   12440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 11:08:11.472655   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:08:11.693912   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 11:08:11.742600   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:08:11.785246   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:08:12.008591   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 11:08:12.124867   12440 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 11:08:12.136514   12440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 11:08:12.146196   12440 start.go:562] Will wait 60s for crictl version
	I0610 11:08:12.158153   12440 ssh_runner.go:195] Run: which crictl
	I0610 11:08:12.179929   12440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:08:12.239786   12440 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 11:08:12.249568   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:08:12.296044   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:08:12.336132   12440 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 11:08:12.339347   12440 out.go:177]   - env NO_PROXY=172.17.146.64
	I0610 11:08:12.342318   12440 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 11:08:12.346320   12440 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 11:08:12.346320   12440 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 11:08:12.346320   12440 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 11:08:12.346320   12440 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 11:08:12.349468   12440 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 11:08:12.349468   12440 ip.go:210] interface addr: 172.17.144.1/20
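
ip.go's getIPForInterface walks the host's adapters looking for a name prefixed with "vEthernet (Default Switch)", then skips the link-local fe80:: address in favour of the IPv4 one (172.17.144.1 in this run). A standard-library sketch of that scan, with the function name being mine:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipv4ForInterface returns the first IPv4 address on the first interface
// whose name starts with prefix, skipping link-local IPv6 addresses the
// same way the log does.
func ipv4ForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1"
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, addr := range addrs {
			if ipnet, ok := addr.(*net.IPNet); ok {
				if ip4 := ipnet.IP.To4(); ip4 != nil {
					return ip4, nil
				}
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
}

func main() {
	ip, err := ipv4ForInterface("vEthernet (Default Switch)")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}
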
	I0610 11:08:12.360325   12440 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 11:08:12.367859   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:08:12.395300   12440 mustload.go:65] Loading cluster: ha-368100
	I0610 11:08:12.396165   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:08:12.396560   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:08:14.717593   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:08:14.717593   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:14.717593   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:08:14.719322   12440 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100 for IP: 172.17.157.100
	I0610 11:08:14.719322   12440 certs.go:194] generating shared ca certs ...
	I0610 11:08:14.719322   12440 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:08:14.719968   12440 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 11:08:14.720261   12440 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 11:08:14.720577   12440 certs.go:256] generating profile certs ...
	I0610 11:08:14.721281   12440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.key
	I0610 11:08:14.721466   12440 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.5909f899
	I0610 11:08:14.721621   12440 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.5909f899 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.146.64 172.17.157.100 172.17.159.254]
	I0610 11:08:14.863861   12440 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.5909f899 ...
	I0610 11:08:14.863861   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.5909f899: {Name:mk463dc3dcad723bb6b1c6d1738104e2013b59d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:08:14.865820   12440 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.5909f899 ...
	I0610 11:08:14.865820   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.5909f899: {Name:mke0c4b1f4fcbf88f651555043d45504a3e9dcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:08:14.866281   12440 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.5909f899 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt
	I0610 11:08:14.881196   12440 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.5909f899 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key
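
The apiserver certificate minted above is the interesting one for an HA cluster: its SAN list carries both control-plane node IPs (172.17.146.64 and 172.17.157.100) plus the shared VIP 172.17.159.254, so clients can keep talking to whichever node holds the VIP. A condensed crypto/x509 sketch of issuing such a cert from a CA; the CA is generated inline to keep the sketch self-contained, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube would load .minikube\ca.crt / ca.key instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose IP SANs mirror the log: service IPs, loopback,
	// both control-plane nodes, and the HA VIP.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.17.146.64"), net.ParseIP("172.17.157.100"), net.ParseIP("172.17.159.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
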
	I0610 11:08:14.882831   12440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key
	I0610 11:08:14.882831   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 11:08:14.883111   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 11:08:14.883295   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 11:08:14.883482   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 11:08:14.883618   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 11:08:14.883769   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 11:08:14.884062   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 11:08:14.884205   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 11:08:14.884820   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 11:08:14.885081   12440 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 11:08:14.885253   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 11:08:14.885278   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 11:08:14.885278   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 11:08:14.885855   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 11:08:14.886431   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 11:08:14.886678   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:08:14.886909   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 11:08:14.886909   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 11:08:14.886909   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:08:17.174617   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:08:17.175321   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:17.175321   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:08:19.955060   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:08:19.956119   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:19.956202   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:08:20.063801   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0610 11:08:20.072105   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0610 11:08:20.112095   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0610 11:08:20.119826   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0610 11:08:20.160902   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0610 11:08:20.169321   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0610 11:08:20.210037   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0610 11:08:20.218213   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0610 11:08:20.250069   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0610 11:08:20.256337   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0610 11:08:20.295355   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0610 11:08:20.302760   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
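
The sa.pub/sa.key, front-proxy, and etcd CA material above is read off the existing control-plane node into memory before being pushed to the joining machine below, so every control plane shares one set of signing keys. A minimal sketch of that fetch-into-memory step, assuming a plain golang.org/x/crypto/ssh client rather than minikube's sshutil wrapper (fetchIntoMemory and its hard-coded "docker" user are illustrative only):

// Sketch: fetch a remote file into memory over SSH, mirroring the
// "scp /var/lib/minikube/certs/sa.key --> memory" steps above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func fetchIntoMemory(addr, keyPath, remotePath string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	// cat the file; the bytes never touch the local disk.
	return sess.Output("sudo cat " + remotePath)
}

func main() {
	b, err := fetchIntoMemory("172.17.146.64:22",
		`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa`,
		"/var/lib/minikube/certs/sa.pub")
	if err != nil {
		panic(err)
	}
	fmt.Printf("fetched %d bytes\n", len(b))
}
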
	I0610 11:08:20.326515   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:08:20.377632   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:08:20.430002   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:08:20.480385   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 11:08:20.534108   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0610 11:08:20.588324   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 11:08:20.641818   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:08:20.692709   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:08:20.744729   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:08:20.796800   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 11:08:20.849827   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 11:08:20.896970   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0610 11:08:20.932791   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0610 11:08:20.973764   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0610 11:08:21.009499   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0610 11:08:21.053301   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0610 11:08:21.091096   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0610 11:08:21.125710   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0610 11:08:21.177619   12440 ssh_runner.go:195] Run: openssl version
	I0610 11:08:21.199444   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:08:21.232216   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:08:21.239772   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:08:21.251990   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:08:21.278629   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:08:21.320430   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 11:08:21.364801   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 11:08:21.372486   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 11:08:21.388577   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 11:08:21.412501   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 11:08:21.453160   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 11:08:21.489863   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 11:08:21.498837   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 11:08:21.515354   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 11:08:21.540785   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
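
Each "ln -fs" above installs a CA under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0) so TLS libraries that scan the hashed /etc/ssl/certs directory can resolve the issuer. A rough sketch of the same hash-and-symlink step, assuming openssl is on PATH; installCA is a hypothetical helper:

// Ask openssl for the certificate's subject hash, then link
// /etc/ssl/certs/<hash>.0 at it, matching the sudo one-liners in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
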
	I0610 11:08:21.575522   12440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:08:21.585702   12440 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 11:08:21.585702   12440 kubeadm.go:928] updating node {m02 172.17.157.100 8443 v1.30.1 docker true true} ...
	I0610 11:08:21.586335   12440 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-368100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.157.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
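
The ExecStart line in the unit above is rendered per node from the cluster config that follows it. A small sketch of that templating, with nodeConfig and kubeletExecStart as illustrative names only, not minikube's actual template code:

// Assemble the per-node kubelet ExecStart flags seen in the log.
package main

import "fmt"

type nodeConfig struct {
	Version, Hostname, NodeIP string
}

func kubeletExecStart(n nodeConfig) string {
	return fmt.Sprintf(
		"/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
		n.Version, n.Hostname, n.NodeIP)
}

func main() {
	fmt.Println(kubeletExecStart(nodeConfig{"v1.30.1", "ha-368100-m02", "172.17.157.100"}))
}
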
	I0610 11:08:21.586413   12440 kube-vip.go:115] generating kube-vip config ...
	I0610 11:08:21.601305   12440 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 11:08:21.632532   12440 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 11:08:21.632532   12440 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
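
This manifest is never applied through the API server; it is written into the kubelet's static-manifest directory (the scp to /etc/kubernetes/manifests/kube-vip.yaml a few lines below), which is exactly why kube-vip can bring up the HA VIP before any control plane answers. A minimal sketch of that install step, assuming root on the node:

// Static pods are installed by dropping a manifest into the kubelet's
// manifest directory; the kubelet starts them with no API server involved.
package main

import "os"

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\n# ... kube-vip spec as logged above ...\n")
	if err := os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0o600); err != nil {
		panic(err)
	}
}
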
	I0610 11:08:21.645822   12440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:08:21.666493   12440 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 11:08:21.680908   12440 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 11:08:21.709502   12440 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0610 11:08:21.709629   12440 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0610 11:08:21.709690   12440 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
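
Each binary above is downloaded together with its published .sha256 and only installed once the digest matches. A stdlib sketch of that verify-before-trust pattern (downloadVerified is a hypothetical helper; the dl.k8s.io .sha256 files contain the bare hex digest):

// Fetch a binary and its checksum file, then compare SHA-256 digests
// before trusting the bytes.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func downloadVerified(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return nil, err
	}

	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
		return nil, fmt.Errorf("checksum mismatch for %s", url)
	}
	return body, nil
}

func main() {
	b, err := downloadVerified("https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm")
	if err != nil {
		panic(err)
	}
	fmt.Printf("verified %d bytes\n", len(b))
}
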
	I0610 11:08:22.813561   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 11:08:22.821563   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 11:08:22.836520   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 11:08:22.836520   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 11:08:22.894365   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 11:08:22.905292   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 11:08:22.935951   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 11:08:22.936099   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 11:08:23.188411   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:08:23.259018   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 11:08:23.271007   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 11:08:23.291020   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 11:08:23.291435   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0610 11:08:24.311912   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0610 11:08:24.334287   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0610 11:08:24.367044   12440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:08:24.405499   12440 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 11:08:24.452486   12440 ssh_runner.go:195] Run: grep 172.17.159.254	control-plane.minikube.internal$ /etc/hosts
	I0610 11:08:24.461431   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
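
The one-liner above is the idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, append the current VIP, and copy the result back via a temp file. The same filter-and-append expressed in Go, as a sketch (updateHosts is illustrative only):

// Drop any stale host entry, append the current mapping, write back.
package main

import (
	"os"
	"strings"
)

func updateHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Matches the grep -v $'\t<host>$' filter in the logged command.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := updateHosts("/etc/hosts", "172.17.159.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
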
	I0610 11:08:24.498123   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:08:24.715537   12440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:08:24.750259   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:08:24.751253   12440 start.go:316] joinCluster: &{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.157.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:08:24.751454   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 11:08:24.751750   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:08:27.081776   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:08:27.081862   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:27.081937   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:08:29.819955   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:08:29.819990   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:29.820513   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:08:30.229641   12440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.477904s)
	I0610 11:08:30.229641   12440 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.157.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:08:30.229641   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lrj7dv.bzd7vf2qmy1wuf5a --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-368100-m02 --control-plane --apiserver-advertise-address=172.17.157.100 --apiserver-bind-port=8443"
	I0610 11:09:14.574915   12440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lrj7dv.bzd7vf2qmy1wuf5a --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-368100-m02 --control-plane --apiserver-advertise-address=172.17.157.100 --apiserver-bind-port=8443": (44.3449062s)
	I0610 11:09:14.574915   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 11:09:15.486935   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-368100-m02 minikube.k8s.io/updated_at=2024_06_10T11_09_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-368100 minikube.k8s.io/primary=false
	I0610 11:09:15.674773   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-368100-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0610 11:09:15.861978   12440 start.go:318] duration metric: took 51.110358s to joinCluster
	I0610 11:09:15.861978   12440 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.157.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:09:15.864907   12440 out.go:177] * Verifying Kubernetes components...
	I0610 11:09:15.862695   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:09:15.881503   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:09:16.370736   12440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:09:16.411082   12440 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 11:09:16.411951   12440 kapi.go:59] client config for ha-368100: &rest.Config{Host:"https://172.17.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0610 11:09:16.412083   12440 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.159.254:8443 with https://172.17.146.64:8443
	I0610 11:09:16.412897   12440 node_ready.go:35] waiting up to 6m0s for node "ha-368100-m02" to be "Ready" ...
	I0610 11:09:16.413105   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:16.413238   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:16.413340   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:16.413340   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:16.434932   12440 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
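
The requests that follow poll /api/v1/nodes/ha-368100-m02 roughly every 500ms until its Ready condition reports True. A simplified sketch of that loop, assuming an *http.Client already configured with the kubeconfig's client certificates (the real run goes through client-go's round trippers, as the round_trippers.go lines show):

// Poll a node's Ready condition until it flips to "True" or we time out.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func waitReady(c *http.Client, url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := c.Get(url)
		if err == nil {
			var n node
			if json.NewDecoder(resp.Body).Decode(&n) == nil {
				for _, cond := range n.Status.Conditions {
					if cond.Type == "Ready" && cond.Status == "True" {
						resp.Body.Close()
						return nil
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node not Ready within %s", timeout)
}

func main() {
	// http.DefaultClient stands in for a TLS-configured client here.
	err := waitReady(http.DefaultClient, "https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02", 6*time.Minute)
	fmt.Println(err)
}
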
	I0610 11:09:16.914144   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:16.914144   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:16.914144   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:16.914144   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:16.920838   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:17.420633   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:17.420633   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:17.420633   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:17.420633   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:17.431628   12440 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 11:09:17.913617   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:17.913778   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:17.913778   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:17.913778   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:17.920236   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:18.418826   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:18.419012   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:18.419012   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:18.419012   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:18.426372   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:09:18.426372   12440 node_ready.go:53] node "ha-368100-m02" has status "Ready":"False"
	I0610 11:09:18.913581   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:18.913581   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:18.913581   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:18.913581   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:18.918510   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:19.422373   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:19.422373   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:19.422373   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:19.422373   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:19.636492   12440 round_trippers.go:574] Response Status: 200 OK in 214 milliseconds
	I0610 11:09:19.913557   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:19.913622   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:19.913729   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:19.913729   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:19.919173   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:20.422852   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:20.422852   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:20.422852   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:20.422852   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:20.482664   12440 round_trippers.go:574] Response Status: 200 OK in 59 milliseconds
	I0610 11:09:20.483559   12440 node_ready.go:53] node "ha-368100-m02" has status "Ready":"False"
	I0610 11:09:20.913578   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:20.913712   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:20.913712   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:20.913712   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:20.918544   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:21.417231   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:21.417231   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:21.417350   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:21.417350   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:21.422233   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:21.923076   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:21.923076   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:21.923076   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:21.923076   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:21.932535   12440 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 11:09:22.427678   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:22.427678   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.427678   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.427678   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.434112   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:22.434716   12440 node_ready.go:49] node "ha-368100-m02" has status "Ready":"True"
	I0610 11:09:22.434716   12440 node_ready.go:38] duration metric: took 6.0217026s for node "ha-368100-m02" to be "Ready" ...
	I0610 11:09:22.434947   12440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:09:22.435043   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:09:22.435043   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.435043   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.435043   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.448463   12440 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0610 11:09:22.457470   12440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.457470   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2jsrh
	I0610 11:09:22.457470   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.457470   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.457470   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.461460   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:09:22.462608   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:22.462667   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.462667   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.462667   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.465464   12440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:09:22.466998   12440 pod_ready.go:92] pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:22.466998   12440 pod_ready.go:81] duration metric: took 9.5284ms for pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.466998   12440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.466998   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dl8r2
	I0610 11:09:22.466998   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.466998   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.466998   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.471730   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:22.472793   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:22.472793   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.472793   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.472793   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.477353   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:22.477353   12440 pod_ready.go:92] pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:22.478349   12440 pod_ready.go:81] duration metric: took 11.3503ms for pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.478349   12440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.478349   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100
	I0610 11:09:22.478349   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.478349   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.478349   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.481361   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:09:22.482349   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:22.482349   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.482349   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.482349   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.485352   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:09:22.486357   12440 pod_ready.go:92] pod "etcd-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:22.486357   12440 pod_ready.go:81] duration metric: took 8.0081ms for pod "etcd-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.486357   12440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.486357   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:09:22.486357   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.486357   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.486357   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.492345   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:22.493348   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:22.493348   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.493348   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.493348   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.497392   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:22.992283   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:09:22.992361   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.992478   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.992478   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.996975   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:22.998706   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:22.998779   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.998779   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.998779   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:23.005659   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:23.491589   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:09:23.491589   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:23.491589   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:23.491589   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:23.496186   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:23.497244   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:23.497244   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:23.497244   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:23.497244   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:23.501637   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:09:23.992874   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:09:23.992874   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:23.992874   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:23.992874   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:23.997949   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:23.998825   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:23.998893   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:23.998893   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:23.998893   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.003517   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:24.492575   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:09:24.492633   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.492633   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.492633   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.497589   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:24.498570   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:24.498570   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.498570   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.498570   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.504125   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:24.504664   12440 pod_ready.go:92] pod "etcd-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:24.504814   12440 pod_ready.go:81] duration metric: took 2.0184399s for pod "etcd-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:24.504885   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:24.504989   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100
	I0610 11:09:24.504989   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.504989   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.505051   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.510658   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:24.511997   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:24.511997   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.511997   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.511997   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.516632   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:24.516632   12440 pod_ready.go:92] pod "kube-apiserver-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:24.516632   12440 pod_ready.go:81] duration metric: took 11.7469ms for pod "kube-apiserver-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:24.516632   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:24.631908   12440 request.go:629] Waited for 114.1279ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:24.632101   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:24.632101   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.632101   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.632155   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.636072   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:09:24.835240   12440 request.go:629] Waited for 197.3151ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:24.835442   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:24.835442   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.835442   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.835442   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.841236   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
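
The "Waited for ... due to client-side throttling" lines come from client-go's default token-bucket limiter (historically QPS 5, burst 10): each request takes a token and blocks while the bucket is empty. A toy stdlib reproduction of that behavior, not client-go's actual flowcontrol implementation:

// Minimal token bucket: refill at qps, cap at burst, block on empty.
package main

import (
	"fmt"
	"time"
)

type bucket struct {
	tokens chan struct{}
}

func newBucket(qps, burst int) *bucket {
	b := &bucket{tokens: make(chan struct{}, burst)}
	for i := 0; i < burst; i++ {
		b.tokens <- struct{}{}
	}
	go func() {
		for range time.Tick(time.Second / time.Duration(qps)) {
			select {
			case b.tokens <- struct{}{}:
			default: // bucket full, drop the token
			}
		}
	}()
	return b
}

// accept blocks until a token is available and reports how long it waited.
func (b *bucket) accept() time.Duration {
	start := time.Now()
	<-b.tokens
	return time.Since(start)
}

func main() {
	lim := newBucket(5, 10)
	for i := 0; i < 12; i++ {
		if wait := lim.accept(); wait > time.Millisecond {
			fmt.Printf("Waited for %v due to client-side throttling\n", wait)
		}
	}
}
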
	I0610 11:09:25.038242   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:25.038361   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:25.038361   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:25.038361   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:25.044066   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:25.241512   12440 request.go:629] Waited for 195.7883ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:25.241512   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:25.241512   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:25.241512   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:25.241512   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:25.247902   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:25.520825   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:25.520825   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:25.520825   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:25.520825   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:25.526612   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:25.630261   12440 request.go:629] Waited for 102.4279ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:25.630483   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:25.630483   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:25.630483   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:25.630483   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:25.639171   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:09:26.020666   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:26.020666   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:26.020666   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:26.020666   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:26.026576   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:26.036565   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:26.036565   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:26.036565   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:26.036565   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:26.041394   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:26.521768   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:26.521903   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:26.521903   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:26.521903   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:26.537341   12440 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0610 11:09:26.538454   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:26.538509   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:26.538509   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:26.538542   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:26.542643   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:26.542643   12440 pod_ready.go:102] pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace has status "Ready":"False"
	I0610 11:09:27.022950   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:27.023259   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.023259   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.023259   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.029548   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:27.033959   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:27.033959   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.033959   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.033959   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.040011   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:27.040011   12440 pod_ready.go:92] pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:27.040553   12440 pod_ready.go:81] duration metric: took 2.5238994s for pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.040553   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.040726   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100
	I0610 11:09:27.040726   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.040726   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.040726   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.046831   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:27.230680   12440 request.go:629] Waited for 182.3585ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:27.230933   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:27.230933   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.231004   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.231004   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.236183   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:27.236740   12440 pod_ready.go:92] pod "kube-controller-manager-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:27.236740   12440 pod_ready.go:81] duration metric: took 196.1854ms for pod "kube-controller-manager-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.236740   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.435687   12440 request.go:629] Waited for 198.6495ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m02
	I0610 11:09:27.436037   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m02
	I0610 11:09:27.436037   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.436037   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.436037   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.441978   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:27.638174   12440 request.go:629] Waited for 195.0451ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:27.638440   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:27.638505   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.638505   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.638505   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.646113   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:09:27.646473   12440 pod_ready.go:92] pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:27.646473   12440 pod_ready.go:81] duration metric: took 409.7302ms for pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.646473   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2j65l" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.828796   12440 request.go:629] Waited for 182.1216ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2j65l
	I0610 11:09:27.829019   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2j65l
	I0610 11:09:27.829019   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.829019   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.829019   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.837564   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:09:28.033108   12440 request.go:629] Waited for 194.3664ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:28.033108   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:28.033108   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:28.033108   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:28.033108   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:28.041736   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:09:28.041848   12440 pod_ready.go:92] pod "kube-proxy-2j65l" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:28.041848   12440 pod_ready.go:81] duration metric: took 395.3716ms for pod "kube-proxy-2j65l" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:28.041848   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2mwxs" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:28.236904   12440 request.go:629] Waited for 195.0542ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mwxs
	I0610 11:09:28.237012   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mwxs
	I0610 11:09:28.237131   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:28.237131   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:28.237131   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:28.243182   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:28.441001   12440 request.go:629] Waited for 196.3973ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:28.441319   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:28.441319   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:28.441319   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:28.441319   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:28.447670   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:28.448111   12440 pod_ready.go:92] pod "kube-proxy-2mwxs" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:28.448111   12440 pod_ready.go:81] duration metric: took 406.2595ms for pod "kube-proxy-2mwxs" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:28.448111   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:28.628076   12440 request.go:629] Waited for 179.9635ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100
	I0610 11:09:28.628246   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100
	I0610 11:09:28.628246   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:28.628246   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:28.628246   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:28.633957   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:28.831371   12440 request.go:629] Waited for 195.972ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:28.831811   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:28.831855   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:28.831855   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:28.831855   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:28.837690   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:28.839448   12440 pod_ready.go:92] pod "kube-scheduler-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:28.839448   12440 pod_ready.go:81] duration metric: took 391.334ms for pod "kube-scheduler-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:28.839548   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:29.038376   12440 request.go:629] Waited for 197.7953ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m02
	I0610 11:09:29.038376   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m02
	I0610 11:09:29.038376   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.038376   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.038376   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.050723   12440 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0610 11:09:29.240927   12440 request.go:629] Waited for 189.2434ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:29.241012   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:29.241101   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.241135   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.241135   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.246417   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:29.248103   12440 pod_ready.go:92] pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:29.248103   12440 pod_ready.go:81] duration metric: took 408.5514ms for pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:29.248103   12440 pod_ready.go:38] duration metric: took 6.8130992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
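	The repeated "Waited for ... due to client-side throttling" lines are client-go's default rate limiter (roughly 5 QPS) spacing out the paired pod/node GETs, not API-server priority-and-fairness. A minimal client-go sketch of the readiness test itself, with the clientset construction elided and the wiring names hypothetical:

    // podready.go — sketch of the check behind the pod_ready.go lines above
    // (hypothetical wiring; clientset construction elided).
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady fetches one kube-system pod and reports whether its Ready
    // condition is True, as logged by pod_ready.go:92.
    func podReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true, nil
            }
        }
        return false, nil
    }

	The helper also re-fetches the owning node between pod checks, which is why each pod costs two throttled requests, about 400ms per pod in the durations above.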
	I0610 11:09:29.248103   12440 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:09:29.260711   12440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:09:29.298319   12440 api_server.go:72] duration metric: took 13.4362301s to wait for apiserver process to appear ...
	I0610 11:09:29.298319   12440 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:09:29.298319   12440 api_server.go:253] Checking apiserver healthz at https://172.17.146.64:8443/healthz ...
	I0610 11:09:29.308542   12440 api_server.go:279] https://172.17.146.64:8443/healthz returned 200:
	ok
	I0610 11:09:29.308899   12440 round_trippers.go:463] GET https://172.17.146.64:8443/version
	I0610 11:09:29.308993   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.308993   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.308993   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.310351   12440 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 11:09:29.310582   12440 api_server.go:141] control plane version: v1.30.1
	I0610 11:09:29.310702   12440 api_server.go:131] duration metric: took 12.3827ms to wait for apiserver health ...
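	The healthz gate is a plain HTTPS GET whose body must be exactly "ok". A sketch, under the assumption that the *http.Client is already configured to trust the cluster CA:

    // healthz.go — sketch of the probe behind api_server.go:253/279.
    package main

    import (
        "io"
        "net/http"
    )

    // apiserverHealthy returns true only for a 200 response whose body is "ok".
    func apiserverHealthy(c *http.Client, hostPort string) (bool, error) {
        resp, err := c.Get("https://" + hostPort + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }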
	I0610 11:09:29.310702   12440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:09:29.428188   12440 request.go:629] Waited for 117.2244ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:09:29.428188   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:09:29.428188   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.428188   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.428188   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.439791   12440 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 11:09:29.449427   12440 system_pods.go:59] 17 kube-system pods found
	I0610 11:09:29.449427   12440 system_pods.go:61] "coredns-7db6d8ff4d-2jsrh" [eec90043-8c22-4041-a178-266148b8368e] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "coredns-7db6d8ff4d-dl8r2" [39350017-f3e1-44ea-a786-c03ee7a0fd8e] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "etcd-ha-368100" [a8a99351-89b1-4e87-a251-e8735df617cc] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "etcd-ha-368100-m02" [fa26841e-b79d-483a-b723-3654fde31626] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kindnet-g66bp" [aeebb510-5026-4062-95d8-be966524f934] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kindnet-qk4fv" [3687f8c4-d986-4023-a2ad-98aa6d4ddd15] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-apiserver-ha-368100" [60620b18-7050-463c-b761-9d89caea2869] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-apiserver-ha-368100-m02" [b0105503-1e6b-4d83-a2ff-c921f7916ceb] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-controller-manager-ha-368100" [a1e4d3d6-ff46-4f52-b5ff-fdad20389b34] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-controller-manager-ha-368100-m02" [18ffec1a-6bb3-4236-98f4-88e03d83516b] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-proxy-2j65l" [dfd9f031-9a9e-46fc-ad2f-b0d61e7d7034] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-proxy-2mwxs" [4ba43598-8c67-43cc-b17a-7d7fbd835edc] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-scheduler-ha-368100" [ac6c4d94-e6c2-4e43-b8ea-7819597ff572] Running
	I0610 11:09:29.452078   12440 system_pods.go:61] "kube-scheduler-ha-368100-m02" [3d706715-7b39-4a07-ad0d-2e91b0476ac7] Running
	I0610 11:09:29.452078   12440 system_pods.go:61] "kube-vip-ha-368100" [fbd9ab1c-c5b6-4b14-b4a7-8da5a58285b4] Running
	I0610 11:09:29.452078   12440 system_pods.go:61] "kube-vip-ha-368100-m02" [2e64f8be-5d5f-41ba-b4c8-9f3623e9efc6] Running
	I0610 11:09:29.452078   12440 system_pods.go:61] "storage-provisioner" [853aab4d-2671-43fd-a221-0966d875b568] Running
	I0610 11:09:29.452078   12440 system_pods.go:74] duration metric: took 141.2633ms to wait for pod list to return data ...
	I0610 11:09:29.452078   12440 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:09:29.629239   12440 request.go:629] Waited for 176.826ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/default/serviceaccounts
	I0610 11:09:29.629239   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/default/serviceaccounts
	I0610 11:09:29.629239   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.629239   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.629239   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.634712   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:29.635132   12440 default_sa.go:45] found service account: "default"
	I0610 11:09:29.635222   12440 default_sa.go:55] duration metric: took 183.1417ms for default service account to be created ...
	I0610 11:09:29.635222   12440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:09:29.832138   12440 request.go:629] Waited for 196.5758ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:09:29.832387   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:09:29.832387   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.832387   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.832387   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.841140   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:09:29.848577   12440 system_pods.go:86] 17 kube-system pods found
	I0610 11:09:29.848577   12440 system_pods.go:89] "coredns-7db6d8ff4d-2jsrh" [eec90043-8c22-4041-a178-266148b8368e] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "coredns-7db6d8ff4d-dl8r2" [39350017-f3e1-44ea-a786-c03ee7a0fd8e] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "etcd-ha-368100" [a8a99351-89b1-4e87-a251-e8735df617cc] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "etcd-ha-368100-m02" [fa26841e-b79d-483a-b723-3654fde31626] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "kindnet-g66bp" [aeebb510-5026-4062-95d8-be966524f934] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "kindnet-qk4fv" [3687f8c4-d986-4023-a2ad-98aa6d4ddd15] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "kube-apiserver-ha-368100" [60620b18-7050-463c-b761-9d89caea2869] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "kube-apiserver-ha-368100-m02" [b0105503-1e6b-4d83-a2ff-c921f7916ceb] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "kube-controller-manager-ha-368100" [a1e4d3d6-ff46-4f52-b5ff-fdad20389b34] Running
	I0610 11:09:29.849128   12440 system_pods.go:89] "kube-controller-manager-ha-368100-m02" [18ffec1a-6bb3-4236-98f4-88e03d83516b] Running
	I0610 11:09:29.849128   12440 system_pods.go:89] "kube-proxy-2j65l" [dfd9f031-9a9e-46fc-ad2f-b0d61e7d7034] Running
	I0610 11:09:29.849128   12440 system_pods.go:89] "kube-proxy-2mwxs" [4ba43598-8c67-43cc-b17a-7d7fbd835edc] Running
	I0610 11:09:29.849128   12440 system_pods.go:89] "kube-scheduler-ha-368100" [ac6c4d94-e6c2-4e43-b8ea-7819597ff572] Running
	I0610 11:09:29.849200   12440 system_pods.go:89] "kube-scheduler-ha-368100-m02" [3d706715-7b39-4a07-ad0d-2e91b0476ac7] Running
	I0610 11:09:29.849232   12440 system_pods.go:89] "kube-vip-ha-368100" [fbd9ab1c-c5b6-4b14-b4a7-8da5a58285b4] Running
	I0610 11:09:29.849232   12440 system_pods.go:89] "kube-vip-ha-368100-m02" [2e64f8be-5d5f-41ba-b4c8-9f3623e9efc6] Running
	I0610 11:09:29.849232   12440 system_pods.go:89] "storage-provisioner" [853aab4d-2671-43fd-a221-0966d875b568] Running
	I0610 11:09:29.849232   12440 system_pods.go:126] duration metric: took 214.009ms to wait for k8s-apps to be running ...
	I0610 11:09:29.849232   12440 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:09:29.860301   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:09:29.898063   12440 system_svc.go:56] duration metric: took 48.83ms WaitForService to wait for kubelet
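	`systemctl is-active --quiet` prints nothing and communicates purely through its exit status (0 = active), so the "kubelet service running" check reduces to whether the remote command succeeded. A sketch, with a hypothetical runSSH helper standing in for minikube's ssh_runner:

    // kubeletActive interprets the exit status of the command logged above:
    // `is-active --quiet` exits 0 iff the unit is active, so a nil error means
    // running. runSSH is a hypothetical stand-in for ssh_runner.go:195.
    func kubeletActive(runSSH func(cmd string) error) bool {
        return runSSH("sudo systemctl is-active --quiet service kubelet") == nil
    }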
	I0610 11:09:29.898063   12440 kubeadm.go:576] duration metric: took 14.0359687s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:09:29.898063   12440 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:09:30.038515   12440 request.go:629] Waited for 140.4513ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes
	I0610 11:09:30.038515   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes
	I0610 11:09:30.038515   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:30.038515   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:30.038515   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:30.044181   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:30.044181   12440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:09:30.044181   12440 node_conditions.go:123] node cpu capacity is 2
	I0610 11:09:30.044181   12440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:09:30.044181   12440 node_conditions.go:123] node cpu capacity is 2
	I0610 11:09:30.044181   12440 node_conditions.go:105] duration metric: took 146.1173ms to run NodePressure ...
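	The NodePressure pass lists the nodes once and reads capacity from each node's status; the duplicated storage/cpu pair above is simply the two nodes in the cluster at this point. Per node, roughly (client-go types; corev1 and fmt assumed imported):

    // printNodeCapacity emits the two per-node figures from
    // node_conditions.go:122-123 above.
    func printNodeCapacity(node corev1.Node) {
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
        fmt.Printf("node cpu capacity is %s\n", cpu.String())
    }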
	I0610 11:09:30.044181   12440 start.go:240] waiting for startup goroutines ...
	I0610 11:09:30.044181   12440 start.go:254] writing updated cluster config ...
	I0610 11:09:30.059297   12440 out.go:177] 
	I0610 11:09:30.074159   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:09:30.074159   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:09:30.085248   12440 out.go:177] * Starting "ha-368100-m03" control-plane node in "ha-368100" cluster
	I0610 11:09:30.088734   12440 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 11:09:30.088880   12440 cache.go:56] Caching tarball of preloaded images
	I0610 11:09:30.088934   12440 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 11:09:30.088934   12440 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 11:09:30.089484   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:09:30.094469   12440 start.go:360] acquireMachinesLock for ha-368100-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:09:30.095385   12440 start.go:364] duration metric: took 916.8µs to acquireMachinesLock for "ha-368100-m03"
	I0610 11:09:30.095607   12440 start.go:93] Provisioning new machine with config: &{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.157.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:09:30.095660   12440 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0610 11:09:30.096519   12440 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 11:09:30.096519   12440 start.go:159] libmachine.API.Create for "ha-368100" (driver="hyperv")
	I0610 11:09:30.096519   12440 client.go:168] LocalClient.Create starting
	I0610 11:09:30.096519   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 11:09:30.096519   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:09:30.096519   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:09:30.099369   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 11:09:30.099648   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:09:30.099648   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:09:30.099648   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 11:09:32.180342   12440 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 11:09:32.180342   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:32.180342   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 11:09:34.071218   12440 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 11:09:34.071218   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:34.071218   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:09:35.662436   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:09:35.662436   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:35.662436   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:09:39.827039   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:09:39.827424   12440 main.go:141] libmachine: [stderr =====>] : 
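	Every Hyper-V call in this log is a powershell.exe child process whose stdout is parsed, here as JSON: the query accepts either an External switch or the well-known "Default Switch" GUID. A stripped-down sketch of that shell-out-and-decode pattern:

    // vmswitch.go — sketch of the PowerShell-out, JSON-in pattern used above
    // (error handling abbreviated).
    package main

    import (
        "encoding/json"
        "os/exec"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // Hyper-V enum: 0 = Private, 1 = Internal, 2 = External
    }

    func listCandidateSwitches() ([]vmSwitch, error) {
        ps := `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | ` +
            `Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')})`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
        if err != nil {
            return nil, err
        }
        var switches []vmSwitch
        err = json.Unmarshal(out, &switches)
        return switches, err
    }

	The @(...) wrapper guarantees ConvertTo-Json emits an array even for a single hit, which keeps the decode side uniform; SwitchType 1 in the output above is why the log falls back to the Internal "Default Switch".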
	I0610 11:09:39.829699   12440 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 11:09:40.298086   12440 main.go:141] libmachine: Creating SSH key...
	I0610 11:09:40.684623   12440 main.go:141] libmachine: Creating VM...
	I0610 11:09:40.684751   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:09:43.947183   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:09:43.947267   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:43.947267   12440 main.go:141] libmachine: Using switch "Default Switch"
	I0610 11:09:43.947454   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:09:45.828869   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:09:45.829091   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:45.829091   12440 main.go:141] libmachine: Creating VHD
	I0610 11:09:45.829175   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 11:09:49.945116   12440 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 36A15D03-6AD9-4444-AE99-6FBEB781697A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 11:09:49.945116   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:49.945116   12440 main.go:141] libmachine: Writing magic tar header
	I0610 11:09:49.945528   12440 main.go:141] libmachine: Writing SSH key tar header
	I0610 11:09:49.956926   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 11:09:53.341238   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:09:53.341238   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:53.341238   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\disk.vhd' -SizeBytes 20000MB
	I0610 11:09:56.045014   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:09:56.045313   12440 main.go:141] libmachine: [stderr =====>] : 
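	The VHD sequence is the boot2docker key-injection trick: create a small fixed VHD (whose data area is raw and seekable), write a tar stream carrying the freshly generated SSH key at its start (the "Writing magic tar header" / "Writing SSH key tar header" steps), then Convert-VHD it to a dynamic disk and Resize-VHD to the requested 20000MB; on first boot the guest detects the tar signature on the unformatted disk and extracts the key before formatting. A hedged sketch of only the tar-writing step (path, entry name, and key bytes are assumptions; the real layout may differ):

    // keytar.go — sketch of writing the key tar into the fixed VHD's data area.
    package main

    import (
        "archive/tar"
        "os"
    )

    func writeKeyTar(fixedVHD string, pubKey []byte) error {
        f, err := os.OpenFile(fixedVHD, os.O_WRONLY, 0) // fixed VHD: raw data, footer at end
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        return tw.Close()
    }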
	I0610 11:09:56.045368   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-368100-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 11:10:00.102297   12440 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-368100-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 11:10:00.102748   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:00.102748   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-368100-m03 -DynamicMemoryEnabled $false
	I0610 11:10:02.581366   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:02.581366   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:02.581636   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-368100-m03 -Count 2
	I0610 11:10:04.996432   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:04.996432   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:04.996515   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-368100-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\boot2docker.iso'
	I0610 11:10:07.844473   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:07.844669   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:07.844733   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-368100-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\disk.vhd'
	I0610 11:10:10.836037   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:10.836811   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:10.836811   12440 main.go:141] libmachine: Starting VM...
	I0610 11:10:10.836811   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-368100-m03
	I0610 11:10:14.134593   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:14.134655   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:14.134655   12440 main.go:141] libmachine: Waiting for host to start...
	I0610 11:10:14.134655   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:16.621463   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:16.621538   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:16.621609   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:19.363799   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:19.363799   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:20.373633   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:22.762313   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:22.762591   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:22.762591   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:25.553872   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:25.553872   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:26.559962   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:28.937268   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:28.937480   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:28.937480   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:31.714054   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:31.714054   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:32.724650   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:35.086562   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:35.086562   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:35.086903   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:37.872500   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:37.872500   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:38.877778   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:41.347406   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:41.347406   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:41.347406   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:44.206960   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:10:44.206998   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:44.207121   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:46.533954   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:46.533954   12440 main.go:141] libmachine: [stderr =====>] : 
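	"Waiting for host to start..." is a bare poll: read the VM state, then the first address of the first NIC, sleeping about a second between rounds until Hyper-V registers a DHCP lease (roughly 30s here before 172.17.144.162 appears). The loop's shape, with a hypothetical runPS helper and the timeout omitted for brevity:

    // waitip.go — sketch of the Get-VM / ipaddresses poll above. runPS is a
    // hypothetical helper returning a PowerShell expression's trimmed stdout.
    package main

    import "time"

    func waitForIP(runPS func(expr string) (string, error), vm string) (string, error) {
        for {
            ip, err := runPS(`((Hyper-V\Get-VM ` + vm + `).networkadapters[0]).ipaddresses[0]`)
            if err != nil {
                return "", err
            }
            if ip != "" {
                return ip, nil // lease registered; empty stdout means not yet
            }
            time.Sleep(time.Second)
        }
    }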
	I0610 11:10:46.533954   12440 machine.go:94] provisionDockerMachine start ...
	I0610 11:10:46.534303   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:48.934389   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:48.934523   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:48.934523   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:51.748659   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:10:51.748659   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:51.754561   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:10:51.766191   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:10:51.766191   12440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:10:51.914205   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:10:51.914313   12440 buildroot.go:166] provisioning hostname "ha-368100-m03"
	I0610 11:10:51.914382   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:54.224593   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:54.224661   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:54.224661   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:57.001782   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:10:57.002110   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:57.007792   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:10:57.008484   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:10:57.008484   12440 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-368100-m03 && echo "ha-368100-m03" | sudo tee /etc/hostname
	I0610 11:10:57.194149   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-368100-m03
	
	I0610 11:10:57.194149   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:59.529733   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:59.530426   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:59.530426   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:02.315293   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:02.315293   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:02.322600   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:02.323073   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:02.323073   12440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-368100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-368100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-368100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:11:02.490473   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:11:02.490613   12440 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 11:11:02.490613   12440 buildroot.go:174] setting up certificates
	I0610 11:11:02.490673   12440 provision.go:84] configureAuth start
	I0610 11:11:02.490829   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:04.805513   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:04.805513   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:04.805513   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:07.589202   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:07.589396   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:07.589396   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:09.933307   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:09.934302   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:09.934302   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:12.810303   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:12.810303   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:12.810303   12440 provision.go:143] copyHostCerts
	I0610 11:11:12.811337   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 11:11:12.811337   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 11:11:12.811337   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 11:11:12.812336   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 11:11:12.813745   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 11:11:12.814000   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 11:11:12.814000   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 11:11:12.814457   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 11:11:12.815607   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 11:11:12.815830   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 11:11:12.815830   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 11:11:12.816372   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 11:11:12.817281   12440 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-368100-m03 san=[127.0.0.1 172.17.144.162 ha-368100-m03 localhost minikube]
	I0610 11:11:13.318101   12440 provision.go:177] copyRemoteCerts
	I0610 11:11:13.331511   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:11:13.331617   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:15.647377   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:15.647377   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:15.647377   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:18.521673   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:18.522910   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:18.522910   12440 sshutil.go:53] new ssh client: &{IP:172.17.144.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\id_rsa Username:docker}
	I0610 11:11:18.641847   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.3102925s)
	I0610 11:11:18.641847   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 11:11:18.642416   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:11:18.696601   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 11:11:18.697075   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:11:18.758128   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 11:11:18.759272   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 11:11:18.828807   12440 provision.go:87] duration metric: took 16.338s to configureAuth
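	configureAuth issues a per-machine server certificate against the shared minikube CA, with the SAN set logged at provision.go:117 (loopback, the VM's address, and its hostnames); most of the 16s is the interleaved Hyper-V IP lookups rather than key generation. The certificate template would look roughly like this, assuming caCert, caKey and serverKey (*rsa.PrivateKey) are already loaded and crypto/rand, crypto/x509, crypto/x509/pkix, math/big, net and time are imported:

    // certsan.go — shape of the server cert from provision.go:117 (sketch).
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1), // real code would use a random serial
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-368100-m03"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"ha-368100-m03", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.144.162")},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)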
	I0610 11:11:18.828880   12440 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:11:18.829555   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:11:18.829555   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:21.385335   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:21.386005   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:21.386005   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:24.358797   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:24.359848   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:24.365707   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:24.365707   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:24.365707   12440 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 11:11:24.515166   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 11:11:24.515166   12440 buildroot.go:70] root file system type: tmpfs
	I0610 11:11:24.515542   12440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 11:11:24.515628   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:26.858873   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:26.858873   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:26.858873   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:29.705890   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:29.705890   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:29.711210   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:29.711259   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:29.711259   12440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.146.64"
	Environment="NO_PROXY=172.17.146.64,172.17.157.100"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 11:11:29.888767   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.146.64
	Environment=NO_PROXY=172.17.146.64,172.17.157.100
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 11:11:29.888860   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:32.274969   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:32.274969   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:32.275971   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:35.140143   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:35.140143   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:35.145372   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:35.146637   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:35.146637   12440 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 11:11:37.402363   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 11:11:37.403353   12440 machine.go:97] duration metric: took 50.8689798s to provisionDockerMachine
	I0610 11:11:37.403353   12440 client.go:171] duration metric: took 2m7.3057803s to LocalClient.Create
	I0610 11:11:37.403353   12440 start.go:167] duration metric: took 2m7.3057803s to libmachine.API.Create "ha-368100"
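	The docker.service update a few lines up is guarded by an install-only-if-changed idiom: diff -u exits 0 when the staged unit matches the live one, so the move/daemon-reload/enable/restart branch after || runs only on a real change. On this fresh node the unit does not exist yet, hence the diff error and the "Created symlink" message. Reduced to its shape (runSSH hypothetical, as before):

    // unitswap.go — the guard from the SSH command at 11:11:35 above.
    func applyDockerUnit(runSSH func(cmd string) error) error {
        return runSSH(`sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
            `{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
            `sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`)
    }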
	I0610 11:11:37.403353   12440 start.go:293] postStartSetup for "ha-368100-m03" (driver="hyperv")
	I0610 11:11:37.403353   12440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:11:37.415362   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:11:37.415362   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:39.810121   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:39.810121   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:39.810121   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:42.635993   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:42.635993   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:42.636340   12440 sshutil.go:53] new ssh client: &{IP:172.17.144.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\id_rsa Username:docker}
	I0610 11:11:42.770715   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.355245s)
	I0610 11:11:42.785243   12440 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:11:42.793020   12440 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:11:42.793020   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 11:11:42.793603   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 11:11:42.794393   12440 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 11:11:42.794393   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 11:11:42.808344   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:11:42.828145   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 11:11:42.877014   12440 start.go:296] duration metric: took 5.4735458s for postStartSetup
	I0610 11:11:42.880148   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:45.228888   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:45.228888   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:45.229590   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:48.077463   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:48.077463   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:48.078586   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:11:48.081366   12440 start.go:128] duration metric: took 2m17.9845662s to createHost
	I0610 11:11:48.081472   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:50.443719   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:50.443719   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:50.443785   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:53.232078   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:53.232078   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:53.238517   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:53.239145   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:53.239145   12440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:11:53.383688   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718017913.388182023
	
	I0610 11:11:53.383756   12440 fix.go:216] guest clock: 1718017913.388182023
	I0610 11:11:53.383756   12440 fix.go:229] Guest: 2024-06-10 11:11:53.388182023 +0000 UTC Remote: 2024-06-10 11:11:48.0813667 +0000 UTC m=+591.222235301 (delta=5.306815323s)
	I0610 11:11:53.383835   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:55.692516   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:55.693383   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:55.693488   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:58.472417   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:58.472417   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:58.477395   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:58.478142   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:58.478142   12440 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718017913
	I0610 11:11:58.629174   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 11:11:53 UTC 2024
	
	I0610 11:11:58.629174   12440 fix.go:236] clock set: Mon Jun 10 11:11:53 UTC 2024
	 (err=<nil>)
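
Here the guest clock is read with date +%s.%N, compared against the host wall clock (a 5.306s delta in this run), and pinned with sudo date -s @<epoch>. A sketch of that check in Go, with a hypothetical runSSH helper and an assumed 2-second drift threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// syncGuestClock compares the guest's wall clock against the host's and
// resets it when the drift exceeds a threshold (the 2s cutoff here is an
// assumption; the log shows a 5.3s delta being corrected).
func syncGuestClock(runSSH func(string) (string, error)) error {
	out, err := runSSH("date +%s.%N")
	if err != nil {
		return err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	drift := time.Since(time.Unix(int64(secs), 0))
	if drift > 2*time.Second || drift < -2*time.Second {
		_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	}
	return err
}
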
	I0610 11:11:58.629303   12440 start.go:83] releasing machines lock for "ha-368100-m03", held for 2m28.5325619s
	I0610 11:11:58.629600   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:12:00.958228   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:00.958228   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:00.958890   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:12:03.702052   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:12:03.702052   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:03.710713   12440 out.go:177] * Found network options:
	I0610 11:12:03.713491   12440 out.go:177]   - NO_PROXY=172.17.146.64,172.17.157.100
	W0610 11:12:03.715785   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 11:12:03.715870   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 11:12:03.717467   12440 out.go:177]   - NO_PROXY=172.17.146.64,172.17.157.100
	W0610 11:12:03.720499   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 11:12:03.720557   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 11:12:03.720837   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 11:12:03.720837   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 11:12:03.723965   12440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:12:03.724139   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:12:03.733980   12440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 11:12:03.733980   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:12:06.094233   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:06.094233   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:06.094233   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:12:06.108405   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:06.108405   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:06.108405   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:12:09.088715   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:12:09.089415   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:09.089415   12440 sshutil.go:53] new ssh client: &{IP:172.17.144.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\id_rsa Username:docker}
	I0610 11:12:09.115193   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:12:09.115193   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:09.115937   12440 sshutil.go:53] new ssh client: &{IP:172.17.144.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\id_rsa Username:docker}
	I0610 11:12:09.196966   12440 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.4626799s)
	W0610 11:12:09.196966   12440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:12:09.209614   12440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:12:09.273391   12440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:12:09.273391   12440 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5493808s)
	I0610 11:12:09.273391   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:12:09.273816   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:12:09.329501   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 11:12:09.370970   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 11:12:09.392972   12440 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 11:12:09.403990   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 11:12:09.442091   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:12:09.478941   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 11:12:09.510983   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:12:09.543967   12440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:12:09.577023   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 11:12:09.615821   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 11:12:09.651203   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 11:12:09.685581   12440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:12:09.716822   12440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:12:09.750433   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:12:09.975274   12440 ssh_runner.go:195] Run: sudo systemctl restart containerd
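
The sed pipeline above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), the runc v2 runtime, and /etc/cni/net.d for CNI configs, then restarts the service. The core substitution, expressed as an in-memory regex rewrite in Go (a simplification of the sed commands, not minikube's code):

package main

import "regexp"

// systemdCgroupRe matches any "SystemdCgroup = ..." assignment, keeping
// its indentation in the first capture group.
var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

// setCgroupfs mirrors the sed edit above: force SystemdCgroup = false in
// containerd's config.toml so the cgroupfs driver is used.
func setCgroupfs(config []byte) []byte {
	return systemdCgroupRe.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))
}
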
	I0610 11:12:10.009671   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:12:10.022200   12440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 11:12:10.063253   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:12:10.106501   12440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:12:10.158449   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:12:10.196998   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:12:10.238070   12440 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 11:12:10.309214   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:12:10.337069   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:12:10.391068   12440 ssh_runner.go:195] Run: which cri-dockerd
	I0610 11:12:10.411614   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 11:12:10.434562   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 11:12:10.489437   12440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 11:12:10.721780   12440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 11:12:10.945755   12440 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 11:12:10.945858   12440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 11:12:10.997291   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:12:11.237905   12440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 11:12:13.793102   12440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5551761s)
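
Writing the 130-byte /etc/docker/daemon.json is what pins dockerd to the cgroupfs cgroup driver before the restart above. The log records only the file's size, so the field set below is an assumption about its likely shape, not the verbatim payload:

package main

import "encoding/json"

// daemonJSON sketches a dockerd config selecting the cgroupfs cgroup
// driver. The exact fields minikube writes are not visible in the log,
// so this is illustrative only.
func daemonJSON() ([]byte, error) {
	return json.MarshalIndent(map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}, "", "  ")
}
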
	I0610 11:12:13.805526   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 11:12:13.845720   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:12:13.884409   12440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 11:12:14.136064   12440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 11:12:14.352074   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:12:14.582162   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 11:12:14.627523   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:12:14.667868   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:12:14.881187   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 11:12:15.003486   12440 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 11:12:15.015749   12440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 11:12:15.024529   12440 start.go:562] Will wait 60s for crictl version
	I0610 11:12:15.037729   12440 ssh_runner.go:195] Run: which crictl
	I0610 11:12:15.057081   12440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:12:15.112655   12440 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 11:12:15.122689   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:12:15.170298   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:12:15.210189   12440 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 11:12:15.214176   12440 out.go:177]   - env NO_PROXY=172.17.146.64
	I0610 11:12:15.217176   12440 out.go:177]   - env NO_PROXY=172.17.146.64,172.17.157.100
	I0610 11:12:15.219169   12440 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 11:12:15.223169   12440 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 11:12:15.223169   12440 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 11:12:15.223169   12440 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 11:12:15.223169   12440 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 11:12:15.226260   12440 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 11:12:15.227196   12440 ip.go:210] interface addr: 172.17.144.1/20
	I0610 11:12:15.238175   12440 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 11:12:15.244185   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
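
The /etc/hosts one-liner is idempotent: grep -v strips any existing host.minikube.internal line before the fresh mapping is appended and the result copied back. The same filter-and-append in Go (a sketch; the temp-file-and-sudo-cp dance is omitted):

package main

import "strings"

// upsertHost drops any stale "<ip>\t<name>" line and appends a fresh one,
// the same filter-and-append the grep -v / echo pipeline performs.
func upsertHost(hosts, ip, name string) string {
	var b strings.Builder
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // skip blank lines and the entry being replaced
		}
		b.WriteString(line)
		b.WriteByte('\n')
	}
	b.WriteString(ip + "\t" + name + "\n")
	return b.String()
}
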
	I0610 11:12:15.269098   12440 mustload.go:65] Loading cluster: ha-368100
	I0610 11:12:15.270315   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:12:15.271173   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:12:17.556091   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:17.556091   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:17.556220   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:12:17.556833   12440 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100 for IP: 172.17.144.162
	I0610 11:12:17.556833   12440 certs.go:194] generating shared ca certs ...
	I0610 11:12:17.556833   12440 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:12:17.557506   12440 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 11:12:17.557779   12440 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 11:12:17.557779   12440 certs.go:256] generating profile certs ...
	I0610 11:12:17.559083   12440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.key
	I0610 11:12:17.559191   12440 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.2e196afa
	I0610 11:12:17.559191   12440 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.2e196afa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.146.64 172.17.157.100 172.17.144.162 172.17.159.254]
	I0610 11:12:17.830488   12440 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.2e196afa ...
	I0610 11:12:17.830488   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.2e196afa: {Name:mk25ca56d579241f53857bc22bf805a9fa61f24e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:12:17.831491   12440 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.2e196afa ...
	I0610 11:12:17.831491   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.2e196afa: {Name:mka09d243d5408e78ceb058be8a57ca5fbce04b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:12:17.832188   12440 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.2e196afa -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt
	I0610 11:12:17.846369   12440 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.2e196afa -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key
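
The apiserver profile cert generated here carries IP SANs for the in-cluster service VIP (10.96.0.1), localhost, all three control-plane node IPs, and the kube-vip address (172.17.159.254); the .2e196afa suffix keys the files to that SAN set, so a changed set forces regeneration. A condensed crypto/x509 sketch of issuing such a cert, assuming the CA cert and key are already loaded (key persistence and PEM encoding omitted):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a serving certificate for the given IP SANs,
// signed by caCert/caKey. The generated private key would also need to
// be persisted; that is trimmed here for brevity.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
}
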
	I0610 11:12:17.847016   12440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key
	I0610 11:12:17.847016   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 11:12:17.848030   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 11:12:17.848271   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 11:12:17.848371   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 11:12:17.848556   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 11:12:17.848692   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 11:12:17.848939   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 11:12:17.849257   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 11:12:17.849424   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 11:12:17.849931   12440 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 11:12:17.849969   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 11:12:17.850208   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 11:12:17.850208   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 11:12:17.850208   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 11:12:17.850208   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 11:12:17.851176   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 11:12:17.851384   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:12:17.851415   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 11:12:17.851415   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:12:20.197038   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:20.197380   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:20.197474   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:12:23.136817   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:12:23.137697   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:23.138433   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:12:23.251098   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0610 11:12:23.261572   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0610 11:12:23.304021   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0610 11:12:23.312110   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0610 11:12:23.349942   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0610 11:12:23.358147   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0610 11:12:23.395193   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0610 11:12:23.403052   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0610 11:12:23.438585   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0610 11:12:23.446514   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0610 11:12:23.489915   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0610 11:12:23.498390   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0610 11:12:23.522298   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:12:23.584640   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:12:23.646900   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:12:23.703148   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 11:12:23.755272   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0610 11:12:23.813130   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 11:12:23.866528   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:12:23.921960   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:12:23.976111   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 11:12:24.036437   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:12:24.102185   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 11:12:24.158119   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0610 11:12:24.194318   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0610 11:12:24.244108   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0610 11:12:24.282521   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0610 11:12:24.319516   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0610 11:12:24.363765   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0610 11:12:24.402404   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0610 11:12:24.451217   12440 ssh_runner.go:195] Run: openssl version
	I0610 11:12:24.476290   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:12:24.514386   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:12:24.525114   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:12:24.539107   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:12:24.572462   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:12:24.615093   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 11:12:24.650240   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 11:12:24.660081   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 11:12:24.671137   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 11:12:24.697949   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 11:12:24.732144   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 11:12:24.771909   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 11:12:24.784352   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 11:12:24.800410   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 11:12:24.825359   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
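
Each CA dropped into /usr/share/ca-certificates also gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is how OpenSSL-based clients locate trust anchors. A Go sketch of the hash-then-link pair; pre-existing links are not handled here:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes a CA file's OpenSSL subject hash and
// symlinks /etc/ssl/certs/<hash>.0 to it, as the openssl/ln pair above does.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	return os.Symlink(certPath, link)
}
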
	I0610 11:12:24.863145   12440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:12:24.871378   12440 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 11:12:24.871544   12440 kubeadm.go:928] updating node {m03 172.17.144.162 8443 v1.30.1 docker true true} ...
	I0610 11:12:24.871789   12440 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-368100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.144.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:12:24.871908   12440 kube-vip.go:115] generating kube-vip config ...
	I0610 11:12:24.885822   12440 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 11:12:24.921007   12440 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 11:12:24.921254   12440 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0610 11:12:24.934674   12440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:12:24.953081   12440 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 11:12:24.965824   12440 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 11:12:24.991728   12440 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0610 11:12:24.991728   12440 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0610 11:12:24.991728   12440 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0610 11:12:24.991728   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 11:12:24.991728   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 11:12:25.007703   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 11:12:25.008559   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:12:25.009060   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 11:12:25.015252   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 11:12:25.015252   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 11:12:25.079144   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 11:12:25.079250   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 11:12:25.079250   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 11:12:25.096930   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 11:12:25.151060   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 11:12:25.151060   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
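
With no cached binaries on the node, kubeadm, kubectl, and kubelet are fetched from dl.k8s.io with the companion .sha256 file as the checksum. A minimal Go sketch of that verify-after-download step (buffering the ~100MB kubelet in memory is a simplification):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchVerified downloads a release binary and checks it against the hex
// SHA-256 digest served at url+".sha256".
func fetchVerified(url string) ([]byte, error) {
	body, err := get(url)
	if err != nil {
		return nil, err
	}
	sum, err := get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	digest := sha256.Sum256(body)
	if hex.EncodeToString(digest[:]) != strings.TrimSpace(string(sum)) {
		return nil, fmt.Errorf("checksum mismatch for %s", url)
	}
	return body, nil
}

// get fetches a URL body into memory.
func get(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}
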
	I0610 11:12:26.591358   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0610 11:12:26.609413   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0610 11:12:26.643730   12440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:12:26.676842   12440 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 11:12:26.725172   12440 ssh_runner.go:195] Run: grep 172.17.159.254	control-plane.minikube.internal$ /etc/hosts
	I0610 11:12:26.731960   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:12:26.771015   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:12:27.004186   12440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:12:27.046029   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:12:27.090129   12440 start.go:316] joinCluster: &{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.157.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.144.162 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:12:27.090129   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 11:12:27.091159   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:12:29.407950   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:29.408346   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:29.408346   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:12:32.241337   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:12:32.241337   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:32.241793   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:12:32.485996   12440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3957235s)
	I0610 11:12:32.485996   12440 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.144.162 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:12:32.486102   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gohj1v.zzhcqgoek2436t6x --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-368100-m03 --control-plane --apiserver-advertise-address=172.17.144.162 --apiserver-bind-port=8443"
	I0610 11:13:17.864991   12440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gohj1v.zzhcqgoek2436t6x --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-368100-m03 --control-plane --apiserver-advertise-address=172.17.144.162 --apiserver-bind-port=8443": (45.3785172s)
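
The join command pins the cluster CA via --discovery-token-ca-cert-hash: the value is the hex SHA-256 of the CA certificate's Subject Public Key Info, which the joining node verifies before trusting bootstrap data from the API server. Computing that hash in Go:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"errors"
)

// caCertHash returns the kubeadm-style discovery hash: "sha256:" plus the
// hex SHA-256 of the CA certificate's SubjectPublicKeyInfo.
func caCertHash(caPEM []byte) (string, error) {
	block, _ := pem.Decode(caPEM)
	if block == nil {
		return "", errors.New("no PEM block in CA certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}
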
	I0610 11:13:17.864991   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 11:13:18.865165   12440 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.000165s)
	I0610 11:13:18.882036   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-368100-m03 minikube.k8s.io/updated_at=2024_06_10T11_13_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-368100 minikube.k8s.io/primary=false
	I0610 11:13:19.059048   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-368100-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0610 11:13:19.405718   12440 start.go:318] duration metric: took 52.3151597s to joinCluster
	I0610 11:13:19.405718   12440 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.17.144.162 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:13:19.410214   12440 out.go:177] * Verifying Kubernetes components...
	I0610 11:13:19.406974   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:13:19.430738   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:13:19.921308   12440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:13:19.956625   12440 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 11:13:19.957617   12440 kapi.go:59] client config for ha-368100: &rest.Config{Host:"https://172.17.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0610 11:13:19.957797   12440 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.159.254:8443 with https://172.17.146.64:8443
	I0610 11:13:19.958752   12440 node_ready.go:35] waiting up to 6m0s for node "ha-368100-m03" to be "Ready" ...
	I0610 11:13:19.958932   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:19.958932   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:19.958991   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:19.958991   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:19.974304   12440 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0610 11:13:20.461552   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:20.461552   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:20.461552   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:20.461552   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:20.466612   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:20.967568   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:20.967568   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:20.967818   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:20.967818   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:20.974178   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:21.470747   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:21.470747   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:21.470747   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:21.470747   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:21.475524   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:21.959429   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:21.959582   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:21.959582   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:21.959582   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:21.970086   12440 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 11:13:21.970086   12440 node_ready.go:53] node "ha-368100-m03" has status "Ready":"False"
	I0610 11:13:22.466468   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:22.466468   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:22.466468   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:22.466468   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:22.471505   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:22.974548   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:22.974650   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:22.974683   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:22.974683   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:22.979314   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:23.463539   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:23.463539   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:23.463539   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:23.463539   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:23.472712   12440 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 11:13:23.971089   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:23.971089   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:23.971089   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:23.971089   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:24.258484   12440 round_trippers.go:574] Response Status: 200 OK in 287 milliseconds
	I0610 11:13:24.259145   12440 node_ready.go:53] node "ha-368100-m03" has status "Ready":"False"
	I0610 11:13:24.471303   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:24.471303   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:24.471303   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:24.471303   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:24.476399   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:24.960362   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:24.960552   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:24.960552   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:24.960552   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:24.965584   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:25.469837   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:25.469907   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:25.469907   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:25.469907   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:25.474194   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:25.971869   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:25.971934   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:25.971934   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:25.971934   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:25.978438   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:26.472556   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:26.472618   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:26.472618   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:26.472618   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:26.477581   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:26.478195   12440 node_ready.go:53] node "ha-368100-m03" has status "Ready":"False"
	I0610 11:13:26.959578   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:26.959578   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:26.959578   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:26.959731   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:26.965147   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:27.461681   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:27.461927   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:27.461927   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:27.461927   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:27.469793   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:13:27.962882   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:27.963018   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:27.963083   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:27.963083   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:27.979430   12440 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0610 11:13:28.461696   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:28.461771   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:28.461826   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:28.461826   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:28.467657   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:28.961855   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:28.961855   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:28.961855   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:28.961855   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:28.966439   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:28.967601   12440 node_ready.go:53] node "ha-368100-m03" has status "Ready":"False"
	I0610 11:13:29.460401   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:29.460401   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:29.460401   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:29.460401   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:29.466165   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:29.962951   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:29.962951   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:29.963101   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:29.963101   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:29.967519   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:30.465128   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:30.465358   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.465358   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.465358   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.476127   12440 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 11:13:30.477470   12440 node_ready.go:49] node "ha-368100-m03" has status "Ready":"True"
	I0610 11:13:30.477522   12440 node_ready.go:38] duration metric: took 10.5186832s for node "ha-368100-m03" to be "Ready" ...
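
The ten-plus seconds of half-second GET polling above is minikube waiting for the new node's Ready condition to flip to True. A minimal client-go sketch of that loop, assuming a recent apimachinery (for wait.PollUntilContextTimeout), a placeholder kubeconfig path, and the ~500 ms cadence visible in the timestamps — none of which is minikube's exact code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms until the node reports Ready=True or 10 minutes pass.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, getErr := cs.CoreV1().Nodes().Get(ctx, "ha-368100-m03", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-368100-m03 is Ready")
}
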
	I0610 11:13:30.477522   12440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0610 11:13:30.477640   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:13:30.477696   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.477696   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.477696   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.488917   12440 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 11:13:30.500005   12440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.500099   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2jsrh
	I0610 11:13:30.500099   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.500099   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.500099   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.503829   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:13:30.505213   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:30.505213   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.505394   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.505394   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.511536   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:30.512107   12440 pod_ready.go:92] pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:30.512138   12440 pod_ready.go:81] duration metric: took 12.0389ms for pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.512205   12440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.512300   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dl8r2
	I0610 11:13:30.512338   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.512338   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.512338   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.516425   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:30.518048   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:30.518145   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.518145   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.518145   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.521458   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:13:30.522466   12440 pod_ready.go:92] pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:30.522466   12440 pod_ready.go:81] duration metric: took 10.2607ms for pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.522466   12440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.522466   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100
	I0610 11:13:30.522466   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.522466   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.522466   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.528692   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:30.529229   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:30.529229   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.529229   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.529229   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.532904   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:13:30.533976   12440 pod_ready.go:92] pod "etcd-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:30.533976   12440 pod_ready.go:81] duration metric: took 11.5096ms for pod "etcd-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.533976   12440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.534565   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:13:30.534565   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.534638   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.534638   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.541358   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:30.544187   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:30.544255   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.544255   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.544328   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.556527   12440 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0610 11:13:30.557664   12440 pod_ready.go:92] pod "etcd-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:30.557763   12440 pod_ready.go:81] duration metric: took 23.7865ms for pod "etcd-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.557763   12440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.668086   12440 request.go:629] Waited for 110.0843ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m03
	I0610 11:13:30.668156   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m03
	I0610 11:13:30.668156   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.668156   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.668337   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.672712   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:30.873594   12440 request.go:629] Waited for 199.688ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:30.873594   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:30.873594   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.873594   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.873594   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.887647   12440 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
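
The "Waited for … due to client-side throttling" lines that begin here come from client-go's request machinery: its client-side rate limiter (QPS 5, burst 10 when rest.Config leaves both fields at zero) delays requests once the wait loop starts issuing two GETs per pod. As the message itself notes, this is unrelated to server-side priority and fairness. If the delays mattered, the knobs live on rest.Config — a sketch, with the values illustrative only:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10 when left at zero; raising both lets a
	// client issue many back-to-back GETs without client-side waits.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
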
	I0610 11:13:31.078322   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m03
	I0610 11:13:31.078476   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:31.078476   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:31.078476   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:31.083295   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:31.267720   12440 request.go:629] Waited for 183.1273ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:31.267720   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:31.267720   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:31.267720   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:31.267720   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:31.275303   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:13:31.564591   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m03
	I0610 11:13:31.564759   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:31.564759   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:31.564759   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:31.572396   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:13:31.675017   12440 request.go:629] Waited for 101.3926ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:31.675122   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:31.675122   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:31.675122   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:31.675122   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:31.680026   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:32.068886   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m03
	I0610 11:13:32.068886   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.068998   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.068998   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.075539   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:32.076718   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:32.076718   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.076821   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.076821   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.081000   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:32.081000   12440 pod_ready.go:92] pod "etcd-ha-368100-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:32.081000   12440 pod_ready.go:81] duration metric: took 1.5232246s for pod "etcd-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:32.081000   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:32.273706   12440 request.go:629] Waited for 192.4855ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100
	I0610 11:13:32.273811   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100
	I0610 11:13:32.273860   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.273860   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.273860   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.279553   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:32.479921   12440 request.go:629] Waited for 198.7924ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:32.479964   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:32.480147   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.480147   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.480400   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.484735   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:32.485867   12440 pod_ready.go:92] pod "kube-apiserver-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:32.485867   12440 pod_ready.go:81] duration metric: took 404.8635ms for pod "kube-apiserver-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:32.485867   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:32.672602   12440 request.go:629] Waited for 186.7336ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:13:32.672835   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:13:32.672835   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.672835   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.672835   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.677676   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:32.874631   12440 request.go:629] Waited for 194.9759ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:32.874859   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:32.874914   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.874914   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.874914   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.880303   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:32.881409   12440 pod_ready.go:92] pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:32.881485   12440 pod_ready.go:81] duration metric: took 395.6152ms for pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:32.881485   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:33.076288   12440 request.go:629] Waited for 194.7192ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m03
	I0610 11:13:33.076288   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m03
	I0610 11:13:33.076288   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:33.076288   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:33.076288   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:33.081498   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:33.267076   12440 request.go:629] Waited for 185.073ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:33.267514   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:33.267514   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:33.267514   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:33.267618   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:33.276259   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:13:33.277539   12440 pod_ready.go:92] pod "kube-apiserver-ha-368100-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:33.277712   12440 pod_ready.go:81] duration metric: took 396.2238ms for pod "kube-apiserver-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:33.277712   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:33.471622   12440 request.go:629] Waited for 193.6777ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100
	I0610 11:13:33.471807   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100
	I0610 11:13:33.471807   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:33.471807   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:33.471807   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:33.482913   12440 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 11:13:33.675070   12440 request.go:629] Waited for 191.344ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:33.675345   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:33.675345   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:33.675345   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:33.675345   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:33.679881   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:33.681539   12440 pod_ready.go:92] pod "kube-controller-manager-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:33.681539   12440 pod_ready.go:81] duration metric: took 403.7554ms for pod "kube-controller-manager-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:33.681593   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:33.880488   12440 request.go:629] Waited for 198.5055ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m02
	I0610 11:13:33.880723   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m02
	I0610 11:13:33.880723   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:33.880723   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:33.880839   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:33.885125   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:34.066585   12440 request.go:629] Waited for 179.5935ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:34.066927   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:34.066927   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:34.066927   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:34.066927   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:34.073083   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:34.074487   12440 pod_ready.go:92] pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:34.074545   12440 pod_ready.go:81] duration metric: took 392.9494ms for pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:34.074545   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:34.272954   12440 request.go:629] Waited for 197.9117ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m03
	I0610 11:13:34.272954   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m03
	I0610 11:13:34.272954   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:34.272954   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:34.272954   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:34.279480   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:34.476789   12440 request.go:629] Waited for 195.9076ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:34.476983   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:34.476983   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:34.476983   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:34.476983   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:34.486699   12440 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 11:13:34.487715   12440 pod_ready.go:92] pod "kube-controller-manager-ha-368100-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:34.487715   12440 pod_ready.go:81] duration metric: took 413.1666ms for pod "kube-controller-manager-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:34.487715   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2j65l" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:34.679156   12440 request.go:629] Waited for 191.2526ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2j65l
	I0610 11:13:34.679304   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2j65l
	I0610 11:13:34.679304   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:34.679304   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:34.679304   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:34.686710   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:13:34.865644   12440 request.go:629] Waited for 176.4181ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:34.865859   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:34.865982   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:34.865982   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:34.866066   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:34.871448   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:34.872913   12440 pod_ready.go:92] pod "kube-proxy-2j65l" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:34.872913   12440 pod_ready.go:81] duration metric: took 385.1945ms for pod "kube-proxy-2j65l" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:34.873122   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2mwxs" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:35.070541   12440 request.go:629] Waited for 197.358ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mwxs
	I0610 11:13:35.070996   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mwxs
	I0610 11:13:35.071109   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:35.071109   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:35.071109   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:35.077434   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:35.275992   12440 request.go:629] Waited for 197.5952ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:35.276310   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:35.276310   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:35.276310   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:35.276310   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:35.280944   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:35.282512   12440 pod_ready.go:92] pod "kube-proxy-2mwxs" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:35.282657   12440 pod_ready.go:81] duration metric: took 409.4723ms for pod "kube-proxy-2mwxs" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:35.282728   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pvvwh" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:35.478410   12440 request.go:629] Waited for 195.4767ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvvwh
	I0610 11:13:35.478725   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvvwh
	I0610 11:13:35.478764   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:35.478798   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:35.478798   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:35.482834   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:35.666079   12440 request.go:629] Waited for 181.6553ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:35.666335   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:35.666335   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:35.666442   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:35.666442   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:35.678282   12440 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 11:13:35.679335   12440 pod_ready.go:92] pod "kube-proxy-pvvwh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:35.679534   12440 pod_ready.go:81] duration metric: took 396.8033ms for pod "kube-proxy-pvvwh" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:35.679573   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:35.868218   12440 request.go:629] Waited for 188.5657ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100
	I0610 11:13:35.868476   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100
	I0610 11:13:35.868593   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:35.868593   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:35.868593   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:35.877933   12440 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 11:13:36.069058   12440 request.go:629] Waited for 190.1224ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:36.069483   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:36.069483   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.069483   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.069483   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.074736   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:36.075764   12440 pod_ready.go:92] pod "kube-scheduler-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:36.075764   12440 pod_ready.go:81] duration metric: took 396.1877ms for pod "kube-scheduler-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:36.075764   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:36.273650   12440 request.go:629] Waited for 197.5249ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m02
	I0610 11:13:36.273861   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m02
	I0610 11:13:36.273861   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.273861   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.273861   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.282565   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:13:36.474212   12440 request.go:629] Waited for 190.3924ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:36.474212   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:36.474212   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.474212   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.474212   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.480110   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:36.480798   12440 pod_ready.go:92] pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:36.481348   12440 pod_ready.go:81] duration metric: took 405.5808ms for pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:36.481564   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:36.675558   12440 request.go:629] Waited for 193.5616ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m03
	I0610 11:13:36.675737   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m03
	I0610 11:13:36.675737   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.675737   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.675821   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.680527   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:36.879561   12440 request.go:629] Waited for 197.1321ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:36.879795   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:36.879795   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.879795   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.879795   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.888238   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:13:36.889115   12440 pod_ready.go:92] pod "kube-scheduler-ha-368100-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:36.889282   12440 pod_ready.go:81] duration metric: took 407.6011ms for pod "kube-scheduler-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:36.889373   12440 pod_ready.go:38] duration metric: took 6.4117288s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
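
Each pod_ready check above pairs a GET on the pod with a GET on its node; the "Ready":"True" verdict itself comes from the pod's status conditions. A self-contained sketch of that predicate (namespace and label selector taken from the log; the podReady helper name is ours, not minikube's):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}
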
	I0610 11:13:36.889439   12440 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:13:36.902667   12440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:13:36.932589   12440 api_server.go:72] duration metric: took 17.5264988s to wait for apiserver process to appear ...
	I0610 11:13:36.932589   12440 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:13:36.932589   12440 api_server.go:253] Checking apiserver healthz at https://172.17.146.64:8443/healthz ...
	I0610 11:13:36.940827   12440 api_server.go:279] https://172.17.146.64:8443/healthz returned 200:
	ok
	I0610 11:13:36.941878   12440 round_trippers.go:463] GET https://172.17.146.64:8443/version
	I0610 11:13:36.941958   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.941958   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.941958   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.944356   12440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:13:36.944873   12440 api_server.go:141] control plane version: v1.30.1
	I0610 11:13:36.944873   12440 api_server.go:131] duration metric: took 12.2846ms to wait for apiserver health ...
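
The health probe above hits the apiserver's /healthz endpoint and expects the literal plain-text body "ok" with HTTP 200, then reads /version for the control-plane version. A sketch of the healthz half using the discovery REST client's raw path access (same kubeconfig assumption as before):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz returns the body "ok" when the apiserver is healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // "ok"
}
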
	I0610 11:13:36.944873   12440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:13:37.067966   12440 request.go:629] Waited for 122.7294ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:13:37.068085   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:13:37.068085   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:37.068085   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:37.068085   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:37.082170   12440 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0610 11:13:37.093121   12440 system_pods.go:59] 24 kube-system pods found
	I0610 11:13:37.093121   12440 system_pods.go:61] "coredns-7db6d8ff4d-2jsrh" [eec90043-8c22-4041-a178-266148b8368e] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "coredns-7db6d8ff4d-dl8r2" [39350017-f3e1-44ea-a786-c03ee7a0fd8e] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "etcd-ha-368100" [a8a99351-89b1-4e87-a251-e8735df617cc] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "etcd-ha-368100-m02" [fa26841e-b79d-483a-b723-3654fde31626] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "etcd-ha-368100-m03" [e26b99db-b727-47e4-9aa8-7cd2f1a58454] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kindnet-g66bp" [aeebb510-5026-4062-95d8-be966524f934] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kindnet-n6fxd" [327dd296-b02d-4784-a971-80cee701dee0] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kindnet-qk4fv" [3687f8c4-d986-4023-a2ad-98aa6d4ddd15] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-apiserver-ha-368100" [60620b18-7050-463c-b761-9d89caea2869] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-apiserver-ha-368100-m02" [b0105503-1e6b-4d83-a2ff-c921f7916ceb] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-apiserver-ha-368100-m03" [4d3f6596-2d88-46bc-8ca1-6115e3f60dca] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-controller-manager-ha-368100" [a1e4d3d6-ff46-4f52-b5ff-fdad20389b34] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-controller-manager-ha-368100-m02" [18ffec1a-6bb3-4236-98f4-88e03d83516b] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-controller-manager-ha-368100-m03" [32925a2e-757b-4bbc-8d2d-258212289ae0] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-proxy-2j65l" [dfd9f031-9a9e-46fc-ad2f-b0d61e7d7034] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-proxy-2mwxs" [4ba43598-8c67-43cc-b17a-7d7fbd835edc] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-proxy-pvvwh" [6cc7a9ab-5235-4c3a-8184-be5b4e436320] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-scheduler-ha-368100" [ac6c4d94-e6c2-4e43-b8ea-7819597ff572] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-scheduler-ha-368100-m02" [3d706715-7b39-4a07-ad0d-2e91b0476ac7] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-scheduler-ha-368100-m03" [d0c84f6a-aae9-4c03-9d69-5b2643e0dfc1] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-vip-ha-368100" [fbd9ab1c-c5b6-4b14-b4a7-8da5a58285b4] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-vip-ha-368100-m02" [2e64f8be-5d5f-41ba-b4c8-9f3623e9efc6] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-vip-ha-368100-m03" [0482cc17-ebee-4f8f-a02d-5e39d035f7b4] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "storage-provisioner" [853aab4d-2671-43fd-a221-0966d875b568] Running
	I0610 11:13:37.093121   12440 system_pods.go:74] duration metric: took 148.2463ms to wait for pod list to return data ...
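
The "24 kube-system pods found" inventory above is a single LIST of the kube-system namespace with each pod's name, UID, and phase echoed. Roughly, under the same assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Phase is Running for every pod at this point in the log.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
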
	I0610 11:13:37.093121   12440 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:13:37.271805   12440 request.go:629] Waited for 178.6825ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/default/serviceaccounts
	I0610 11:13:37.271805   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/default/serviceaccounts
	I0610 11:13:37.271805   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:37.271805   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:37.271805   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:37.276890   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:37.277983   12440 default_sa.go:45] found service account: "default"
	I0610 11:13:37.278035   12440 default_sa.go:55] duration metric: took 184.8609ms for default service account to be created ...
	I0610 11:13:37.278035   12440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:13:37.475811   12440 request.go:629] Waited for 197.534ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:13:37.476060   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:13:37.476060   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:37.476157   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:37.476157   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:37.492717   12440 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0610 11:13:37.504375   12440 system_pods.go:86] 24 kube-system pods found
	I0610 11:13:37.504375   12440 system_pods.go:89] "coredns-7db6d8ff4d-2jsrh" [eec90043-8c22-4041-a178-266148b8368e] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "coredns-7db6d8ff4d-dl8r2" [39350017-f3e1-44ea-a786-c03ee7a0fd8e] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "etcd-ha-368100" [a8a99351-89b1-4e87-a251-e8735df617cc] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "etcd-ha-368100-m02" [fa26841e-b79d-483a-b723-3654fde31626] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "etcd-ha-368100-m03" [e26b99db-b727-47e4-9aa8-7cd2f1a58454] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "kindnet-g66bp" [aeebb510-5026-4062-95d8-be966524f934] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "kindnet-n6fxd" [327dd296-b02d-4784-a971-80cee701dee0] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kindnet-qk4fv" [3687f8c4-d986-4023-a2ad-98aa6d4ddd15] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-apiserver-ha-368100" [60620b18-7050-463c-b761-9d89caea2869] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-apiserver-ha-368100-m02" [b0105503-1e6b-4d83-a2ff-c921f7916ceb] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-apiserver-ha-368100-m03" [4d3f6596-2d88-46bc-8ca1-6115e3f60dca] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-controller-manager-ha-368100" [a1e4d3d6-ff46-4f52-b5ff-fdad20389b34] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-controller-manager-ha-368100-m02" [18ffec1a-6bb3-4236-98f4-88e03d83516b] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-controller-manager-ha-368100-m03" [32925a2e-757b-4bbc-8d2d-258212289ae0] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-proxy-2j65l" [dfd9f031-9a9e-46fc-ad2f-b0d61e7d7034] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-proxy-2mwxs" [4ba43598-8c67-43cc-b17a-7d7fbd835edc] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-proxy-pvvwh" [6cc7a9ab-5235-4c3a-8184-be5b4e436320] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-scheduler-ha-368100" [ac6c4d94-e6c2-4e43-b8ea-7819597ff572] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-scheduler-ha-368100-m02" [3d706715-7b39-4a07-ad0d-2e91b0476ac7] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-scheduler-ha-368100-m03" [d0c84f6a-aae9-4c03-9d69-5b2643e0dfc1] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-vip-ha-368100" [fbd9ab1c-c5b6-4b14-b4a7-8da5a58285b4] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-vip-ha-368100-m02" [2e64f8be-5d5f-41ba-b4c8-9f3623e9efc6] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-vip-ha-368100-m03" [0482cc17-ebee-4f8f-a02d-5e39d035f7b4] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "storage-provisioner" [853aab4d-2671-43fd-a221-0966d875b568] Running
	I0610 11:13:37.504568   12440 system_pods.go:126] duration metric: took 226.5318ms to wait for k8s-apps to be running ...
	I0610 11:13:37.504568   12440 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:13:37.518316   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:13:37.546906   12440 system_svc.go:56] duration metric: took 42.3372ms (WaitForService) to wait for kubelet
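
The kubelet check above shells out to systemctl inside the VM; `systemctl is-active --quiet <unit>` exits 0 only when the unit is active. A local-exec sketch of the same test (minikube actually runs it through its ssh_runner, which this sketch does not model):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the unit is active; any other status means it is not.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
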
	I0610 11:13:37.546906   12440 kubeadm.go:576] duration metric: took 18.1410394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:13:37.546906   12440 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:13:37.680502   12440 request.go:629] Waited for 133.595ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes
	I0610 11:13:37.680687   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes
	I0610 11:13:37.680687   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:37.680687   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:37.680687   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:37.689593   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:13:37.691635   12440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:13:37.691635   12440 node_conditions.go:123] node cpu capacity is 2
	I0610 11:13:37.691635   12440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:13:37.691635   12440 node_conditions.go:123] node cpu capacity is 2
	I0610 11:13:37.691635   12440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:13:37.691635   12440 node_conditions.go:123] node cpu capacity is 2
	I0610 11:13:37.691635   12440 node_conditions.go:105] duration metric: took 144.7282ms to run NodePressure ...
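
The NodePressure pass lists all nodes once and reads each node's capacity; the three identical storage/cpu pairs above are the cluster's three nodes. A sketch that prints the same fields and would surface any memory or disk pressure condition (same kubeconfig assumption):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure and DiskPressure should be False on a healthy node.
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status != corev1.ConditionFalse {
				fmt.Printf("  pressure: %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
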
	I0610 11:13:37.691635   12440 start.go:240] waiting for startup goroutines ...
	I0610 11:13:37.691635   12440 start.go:254] writing updated cluster config ...
	I0610 11:13:37.704674   12440 ssh_runner.go:195] Run: rm -f paused
	I0610 11:13:37.865188   12440 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:13:37.868985   12440 out.go:177] * Done! kubectl is now configured to use "ha-368100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 10 11:05:38 ha-368100 cri-dockerd[1227]: time="2024-06-10T11:05:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/792a1f88c34ef3d0443b9041ca9af3b415a7afe07c8bb4b0d44692ef213163f8/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 11:05:39 ha-368100 cri-dockerd[1227]: time="2024-06-10T11:05:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7e1f56d0d8fcd8b456122b36831f2495c9e29317bbb6cc9b665c88d54331aa7/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 11:05:39 ha-368100 cri-dockerd[1227]: time="2024-06-10T11:05:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5401e0e3d499b0543b4e30bc86ebfa14378c65915eb9df177e04f8d5355633fd/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.288825207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.288939812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.288960413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.289398530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.394610130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.394703134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.394738235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.394849139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.463534816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.463789826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.464000134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.464238744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:14:19 ha-368100 dockerd[1332]: time="2024-06-10T11:14:19.292214584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:14:19 ha-368100 dockerd[1332]: time="2024-06-10T11:14:19.292457987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:14:19 ha-368100 dockerd[1332]: time="2024-06-10T11:14:19.292546388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:14:19 ha-368100 dockerd[1332]: time="2024-06-10T11:14:19.292850491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:14:19 ha-368100 cri-dockerd[1227]: time="2024-06-10T11:14:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/933e7b7f774c62b84bd1c6980099a49ce8b12d42f25be8182a33603cb751e0a6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 11:14:20 ha-368100 cri-dockerd[1227]: time="2024-06-10T11:14:20Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 10 11:14:21 ha-368100 dockerd[1332]: time="2024-06-10T11:14:21.121737675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:14:21 ha-368100 dockerd[1332]: time="2024-06-10T11:14:21.121964978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:14:21 ha-368100 dockerd[1332]: time="2024-06-10T11:14:21.122021878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:14:21 ha-368100 dockerd[1332]: time="2024-06-10T11:14:21.122180780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	df85a8c280b4e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   933e7b7f774c6       busybox-fc5497c4f-kff2v
	09cd6b70fc20f       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   5401e0e3d499b       coredns-7db6d8ff4d-dl8r2
	223bd98c3c165       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   d7e1f56d0d8fc       storage-provisioner
	efb3b4096e35d       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   792a1f88c34ef       coredns-7db6d8ff4d-2jsrh
	73444aa5980bc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              9 minutes ago        Running             kindnet-cni               0                   ce4274ec4f374       kindnet-qk4fv
	115b8330d5339       747097150317f                                                                                         9 minutes ago        Running             kube-proxy                0                   9832445ddcc98       kube-proxy-2j65l
	56f42c342b96a       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   1916a970b5e71       kube-vip-ha-368100
	b540b6d71db60       a52dc94f0a912                                                                                         10 minutes ago       Running             kube-scheduler            0                   7be8b7f9270b0       kube-scheduler-ha-368100
	d777e3ce95a04       25a1387cdab82                                                                                         10 minutes ago       Running             kube-controller-manager   0                   5fd6688a8e7bb       kube-controller-manager-ha-368100
	fb70745682bca       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   b644d46a1aae9       etcd-ha-368100
	f08944a38cbb0       91be940803172                                                                                         10 minutes ago       Running             kube-apiserver            0                   5839fc372f844       kube-apiserver-ha-368100
	
	
	==> coredns [09cd6b70fc20] <==
	[INFO] 10.244.1.2:59848 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037401267s
	[INFO] 10.244.1.2:47482 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117601s
	[INFO] 10.244.1.2:44195 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214802s
	[INFO] 10.244.0.4:53862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112701s
	[INFO] 10.244.0.4:50783 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076801s
	[INFO] 10.244.0.4:51910 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232602s
	[INFO] 10.244.0.4:37023 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000246702s
	[INFO] 10.244.0.4:47932 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172502s
	[INFO] 10.244.2.2:34531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248003s
	[INFO] 10.244.2.2:33872 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000072201s
	[INFO] 10.244.2.2:59280 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000474005s
	[INFO] 10.244.2.2:59958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065401s
	[INFO] 10.244.0.4:51073 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114501s
	[INFO] 10.244.2.2:49831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000282202s
	[INFO] 10.244.2.2:54890 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080101s
	[INFO] 10.244.2.2:60475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060901s
	[INFO] 10.244.2.2:55509 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062601s
	[INFO] 10.244.1.2:47076 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119601s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000172402s
	[INFO] 10.244.1.2:50519 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000300503s
	[INFO] 10.244.0.4:46515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178602s
	[INFO] 10.244.0.4:47844 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000323503s
	[INFO] 10.244.2.2:36577 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000322703s
	[INFO] 10.244.2.2:39282 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137601s
	[INFO] 10.244.2.2:56688 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000217803s
	
	
	==> coredns [efb3b4096e35] <==
	[INFO] 10.244.0.4:43477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000285403s
	[INFO] 10.244.0.4:39133 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.040194394s
	[INFO] 10.244.2.2:38597 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148801s
	[INFO] 10.244.2.2:32822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000123601s
	[INFO] 10.244.2.2:49451 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000064601s
	[INFO] 10.244.1.2:45373 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.001235812s
	[INFO] 10.244.1.2:34919 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137001s
	[INFO] 10.244.0.4:39606 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012679324s
	[INFO] 10.244.0.4:44144 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138702s
	[INFO] 10.244.0.4:48550 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122201s
	[INFO] 10.244.2.2:35261 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113401s
	[INFO] 10.244.2.2:57747 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024111336s
	[INFO] 10.244.2.2:53428 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139601s
	[INFO] 10.244.2.2:47173 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000302703s
	[INFO] 10.244.1.2:38112 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195702s
	[INFO] 10.244.1.2:43394 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104001s
	[INFO] 10.244.1.2:33777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114402s
	[INFO] 10.244.1.2:41805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088601s
	[INFO] 10.244.0.4:45442 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251503s
	[INFO] 10.244.0.4:40494 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074601s
	[INFO] 10.244.0.4:49300 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059601s
	[INFO] 10.244.1.2:48668 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122402s
	[INFO] 10.244.0.4:59785 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000239702s
	[INFO] 10.244.0.4:46111 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084601s
	[INFO] 10.244.2.2:60671 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177302s
	
	
	==> describe nodes <==
	Name:               ha-368100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-368100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-368100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T11_05_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:05:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-368100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:15:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:14:41 +0000   Mon, 10 Jun 2024 11:05:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:14:41 +0000   Mon, 10 Jun 2024 11:05:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:14:41 +0000   Mon, 10 Jun 2024 11:05:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:14:41 +0000   Mon, 10 Jun 2024 11:05:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.146.64
	  Hostname:    ha-368100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 106727802fb741c6bff8a0ac9485fce0
	  System UUID:                72c8b920-e217-884f-be80-9e941a2f6edb
	  Boot ID:                    86d99b64-160f-4792-ac83-4a9e72e98c28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kff2v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-7db6d8ff4d-2jsrh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 coredns-7db6d8ff4d-dl8r2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-ha-368100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-qk4fv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-368100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-368100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-2j65l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-368100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-368100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9m58s              kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-368100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-368100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-368100 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node ha-368100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node ha-368100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node ha-368100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-368100 event: Registered Node ha-368100 in Controller
	  Normal  NodeReady                9m48s              kubelet          Node ha-368100 status is now: NodeReady
	  Normal  RegisteredNode           5m54s              node-controller  Node ha-368100 event: Registered Node ha-368100 in Controller
	  Normal  RegisteredNode           113s               node-controller  Node ha-368100 event: Registered Node ha-368100 in Controller
	
	
	Name:               ha-368100-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-368100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-368100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T11_09_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:09:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-368100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:15:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:14:46 +0000   Mon, 10 Jun 2024 11:09:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:14:46 +0000   Mon, 10 Jun 2024 11:09:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:14:46 +0000   Mon, 10 Jun 2024 11:09:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:14:46 +0000   Mon, 10 Jun 2024 11:09:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.157.100
	  Hostname:    ha-368100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcd1e300f599473fa217cdf3004cc672
	  System UUID:                0564af0b-b479-c54b-840a-d86e879c7ca4
	  Boot ID:                    44f16c22-bd3d-4bbd-9872-695e8d9773fb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9tfq9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-ha-368100-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-g66bp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-368100-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-ha-368100-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-2mwxs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-368100-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-368100-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m11s                  kube-proxy       
	  Normal  Starting                 6m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m17s (x2 over 6m17s)  kubelet          Node ha-368100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s (x2 over 6m17s)  kubelet          Node ha-368100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s (x2 over 6m17s)  kubelet          Node ha-368100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-368100-m02 event: Registered Node ha-368100-m02 in Controller
	  Normal  NodeReady                6m4s                   kubelet          Node ha-368100-m02 status is now: NodeReady
	  Normal  RegisteredNode           5m54s                  node-controller  Node ha-368100-m02 event: Registered Node ha-368100-m02 in Controller
	  Normal  RegisteredNode           113s                   node-controller  Node ha-368100-m02 event: Registered Node ha-368100-m02 in Controller
	
	
	Name:               ha-368100-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-368100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-368100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T11_13_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:13:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-368100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:15:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:14:42 +0000   Mon, 10 Jun 2024 11:13:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:14:42 +0000   Mon, 10 Jun 2024 11:13:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:14:42 +0000   Mon, 10 Jun 2024 11:13:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:14:42 +0000   Mon, 10 Jun 2024 11:13:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.144.162
	  Hostname:    ha-368100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f366647d884480da181af8caa28b5d5
	  System UUID:                c1ffacdd-d11a-8444-b3ec-cc3e820687e2
	  Boot ID:                    86b69237-2b2c-449c-8f74-93780012c7ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s49nb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-ha-368100-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m12s
	  kube-system                 kindnet-n6fxd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m16s
	  kube-system                 kube-apiserver-ha-368100-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-ha-368100-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-pvvwh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-ha-368100-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-vip-ha-368100-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node ha-368100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node ha-368100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x7 over 2m16s)  kubelet          Node ha-368100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m14s                  node-controller  Node ha-368100-m03 event: Registered Node ha-368100-m03 in Controller
	  Normal  RegisteredNode           2m12s                  node-controller  Node ha-368100-m03 event: Registered Node ha-368100-m03 in Controller
	  Normal  RegisteredNode           113s                   node-controller  Node ha-368100-m03 event: Registered Node ha-368100-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000175] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun10 11:04] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.191196] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +32.178537] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.113388] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.570574] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.202902] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.240161] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +2.856344] systemd-fstab-generator[1180]: Ignoring "noauto" option for root device
	[  +0.210854] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.231849] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.306923] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[ +11.826773] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.119542] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.693035] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[Jun10 11:05] systemd-fstab-generator[1726]: Ignoring "noauto" option for root device
	[  +0.092283] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.711689] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.333377] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[ +17.576528] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.861142] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.227167] kauditd_printk_skb: 14 callbacks suppressed
	[Jun10 11:09] kauditd_printk_skb: 9 callbacks suppressed
	[Jun10 11:12] hrtimer: interrupt took 17869246 ns
	
	
	==> etcd [fb70745682bc] <==
	{"level":"info","ts":"2024-06-10T11:13:14.032094Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"4de5a44339f79dbf","remote-peer-id":"aa994ec211af834f"}
	{"level":"info","ts":"2024-06-10T11:13:14.051899Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"4de5a44339f79dbf","remote-peer-id":"aa994ec211af834f"}
	{"level":"info","ts":"2024-06-10T11:13:14.052262Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4de5a44339f79dbf","remote-peer-id":"aa994ec211af834f"}
	{"level":"warn","ts":"2024-06-10T11:13:14.158395Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"aa994ec211af834f","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-10T11:13:15.158895Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"aa994ec211af834f","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-10T11:13:16.1594Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"aa994ec211af834f","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-10T11:13:16.2488Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"aa994ec211af834f","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"43.772205ms"}
	{"level":"warn","ts":"2024-06-10T11:13:16.248848Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"6b3b55d3cf2fe4e4","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"43.899506ms"}
	{"level":"info","ts":"2024-06-10T11:13:16.250114Z","caller":"traceutil/trace.go:171","msg":"trace[1837544970] transaction","detail":"{read_only:false; response_revision:1569; number_of_response:1; }","duration":"232.119109ms","start":"2024-06-10T11:13:16.017958Z","end":"2024-06-10T11:13:16.250078Z","steps":["trace[1837544970] 'process raft request'  (duration: 231.799406ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:13:16.295009Z","caller":"traceutil/trace.go:171","msg":"trace[1340700638] linearizableReadLoop","detail":"{readStateIndex:1747; appliedIndex:1748; }","duration":"245.829521ms","start":"2024-06-10T11:13:16.049157Z","end":"2024-06-10T11:13:16.294987Z","steps":["trace[1340700638] 'read index received'  (duration: 245.822221ms)","trace[1340700638] 'applied index is now lower than readState.Index'  (duration: 6.1µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T11:13:16.295395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.260025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-368100-m03\" ","response":"range_response_count:1 size:3375"}
	{"level":"info","ts":"2024-06-10T11:13:16.295453Z","caller":"traceutil/trace.go:171","msg":"trace[18638520] range","detail":"{range_begin:/registry/minions/ha-368100-m03; range_end:; response_count:1; response_revision:1569; }","duration":"246.427027ms","start":"2024-06-10T11:13:16.049013Z","end":"2024-06-10T11:13:16.29544Z","steps":["trace[18638520] 'agreement among raft nodes before linearized reading'  (duration: 246.132624ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:13:17.667671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4de5a44339f79dbf switched to configuration voters=(5613073119229484479 7726863953886700772 12292943253311816527)"}
	{"level":"info","ts":"2024-06-10T11:13:17.667807Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"272211f56e03cfd","local-member-id":"4de5a44339f79dbf"}
	{"level":"info","ts":"2024-06-10T11:13:17.674958Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"4de5a44339f79dbf","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"aa994ec211af834f"}
	{"level":"warn","ts":"2024-06-10T11:13:19.309523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.629802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-368100-m03\" ","response":"range_response_count:1 size:4140"}
	{"level":"info","ts":"2024-06-10T11:13:19.309585Z","caller":"traceutil/trace.go:171","msg":"trace[1454379059] range","detail":"{range_begin:/registry/minions/ha-368100-m03; range_end:; response_count:1; response_revision:1581; }","duration":"110.188506ms","start":"2024-06-10T11:13:19.199383Z","end":"2024-06-10T11:13:19.309572Z","steps":["trace[1454379059] 'range keys from in-memory index tree'  (duration: 107.541085ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T11:13:24.269203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.312718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-368100-m03\" ","response":"range_response_count:1 size:4443"}
	{"level":"warn","ts":"2024-06-10T11:13:24.269262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"357.678647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-06-10T11:13:24.269263Z","caller":"traceutil/trace.go:171","msg":"trace[594349483] range","detail":"{range_begin:/registry/minions/ha-368100-m03; range_end:; response_count:1; response_revision:1600; }","duration":"281.410919ms","start":"2024-06-10T11:13:23.987838Z","end":"2024-06-10T11:13:24.269249Z","steps":["trace[594349483] 'range keys from in-memory index tree'  (duration: 279.371701ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:13:24.269304Z","caller":"traceutil/trace.go:171","msg":"trace[1109411643] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1600; }","duration":"357.753347ms","start":"2024-06-10T11:13:23.91154Z","end":"2024-06-10T11:13:24.269293Z","steps":["trace[1109411643] 'range keys from in-memory index tree'  (duration: 356.605538ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T11:13:24.269731Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T11:13:23.911527Z","time spent":"358.19285ms","remote":"127.0.0.1:51130","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":457,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" "}
	{"level":"info","ts":"2024-06-10T11:15:03.765109Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1066}
	{"level":"info","ts":"2024-06-10T11:15:03.876117Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1066,"took":"110.615076ms","hash":98576834,"current-db-size-bytes":3596288,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-06-10T11:15:03.876241Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":98576834,"revision":1066,"compact-revision":-1}
	
	
	==> kernel <==
	 11:15:27 up 12 min,  0 users,  load average: 0.87, 0.57, 0.33
	Linux ha-368100 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [73444aa5980b] <==
	I0610 11:14:45.723122       1 main.go:250] Node ha-368100-m03 has CIDR [10.244.2.0/24] 
	I0610 11:14:55.736653       1 main.go:223] Handling node with IPs: map[172.17.146.64:{}]
	I0610 11:14:55.737058       1 main.go:227] handling current node
	I0610 11:14:55.737238       1 main.go:223] Handling node with IPs: map[172.17.157.100:{}]
	I0610 11:14:55.737786       1 main.go:250] Node ha-368100-m02 has CIDR [10.244.1.0/24] 
	I0610 11:14:55.737981       1 main.go:223] Handling node with IPs: map[172.17.144.162:{}]
	I0610 11:14:55.738019       1 main.go:250] Node ha-368100-m03 has CIDR [10.244.2.0/24] 
	I0610 11:15:05.747069       1 main.go:223] Handling node with IPs: map[172.17.146.64:{}]
	I0610 11:15:05.747117       1 main.go:227] handling current node
	I0610 11:15:05.747149       1 main.go:223] Handling node with IPs: map[172.17.157.100:{}]
	I0610 11:15:05.747156       1 main.go:250] Node ha-368100-m02 has CIDR [10.244.1.0/24] 
	I0610 11:15:05.747990       1 main.go:223] Handling node with IPs: map[172.17.144.162:{}]
	I0610 11:15:05.748081       1 main.go:250] Node ha-368100-m03 has CIDR [10.244.2.0/24] 
	I0610 11:15:15.760032       1 main.go:223] Handling node with IPs: map[172.17.146.64:{}]
	I0610 11:15:15.760093       1 main.go:227] handling current node
	I0610 11:15:15.760107       1 main.go:223] Handling node with IPs: map[172.17.157.100:{}]
	I0610 11:15:15.760113       1 main.go:250] Node ha-368100-m02 has CIDR [10.244.1.0/24] 
	I0610 11:15:15.760747       1 main.go:223] Handling node with IPs: map[172.17.144.162:{}]
	I0610 11:15:15.760935       1 main.go:250] Node ha-368100-m03 has CIDR [10.244.2.0/24] 
	I0610 11:15:25.777205       1 main.go:223] Handling node with IPs: map[172.17.146.64:{}]
	I0610 11:15:25.777237       1 main.go:227] handling current node
	I0610 11:15:25.777250       1 main.go:223] Handling node with IPs: map[172.17.157.100:{}]
	I0610 11:15:25.777258       1 main.go:250] Node ha-368100-m02 has CIDR [10.244.1.0/24] 
	I0610 11:15:25.778610       1 main.go:223] Handling node with IPs: map[172.17.144.162:{}]
	I0610 11:15:25.778671       1 main.go:250] Node ha-368100-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [f08944a38cbb] <==
	I0610 11:05:10.063630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 11:05:10.118592       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0610 11:05:10.149206       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 11:05:24.095000       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0610 11:05:24.761969       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0610 11:13:11.393204       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.6µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0610 11:13:11.401485       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0610 11:13:11.401653       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0610 11:13:11.410924       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0610 11:13:11.411659       1 timeout.go:142] post-timeout activity - time-elapsed: 22.994289ms, PATCH "/api/v1/namespaces/default/events/ha-368100-m03.17d7a04286a8c112" result: <nil>
	E0610 11:14:24.513781       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61725: use of closed network connection
	E0610 11:14:26.221526       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61727: use of closed network connection
	E0610 11:14:26.795278       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61729: use of closed network connection
	E0610 11:14:27.446805       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61731: use of closed network connection
	E0610 11:14:28.022660       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61733: use of closed network connection
	E0610 11:14:28.630613       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61735: use of closed network connection
	E0610 11:14:29.191488       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61737: use of closed network connection
	E0610 11:14:29.763799       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61739: use of closed network connection
	E0610 11:14:30.322607       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61741: use of closed network connection
	E0610 11:14:31.367571       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61744: use of closed network connection
	E0610 11:14:41.912609       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61746: use of closed network connection
	E0610 11:14:42.481822       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61749: use of closed network connection
	E0610 11:14:53.050075       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61751: use of closed network connection
	E0610 11:14:53.594397       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61754: use of closed network connection
	E0610 11:15:04.145164       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61756: use of closed network connection
	
	
	==> kube-controller-manager [d777e3ce95a0] <==
	I0610 11:05:39.044000       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 11:05:40.705220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="33.812679ms"
	I0610 11:05:40.732437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.787283ms"
	I0610 11:05:40.733191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.3µs"
	I0610 11:09:09.887829       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-368100-m02\" does not exist"
	I0610 11:09:09.906077       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-368100-m02" podCIDRs=["10.244.1.0/24"]
	I0610 11:09:14.090995       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-368100-m02"
	I0610 11:13:10.561923       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-368100-m03\" does not exist"
	I0610 11:13:10.615274       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-368100-m03" podCIDRs=["10.244.2.0/24"]
	I0610 11:13:14.259195       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-368100-m03"
	I0610 11:14:18.367411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="235.433656ms"
	I0610 11:14:18.653039       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="283.895161ms"
	I0610 11:14:18.829591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="176.462341ms"
	I0610 11:14:18.869124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.35541ms"
	I0610 11:14:18.869436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="131.602µs"
	I0610 11:14:19.037730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.735535ms"
	I0610 11:14:19.038609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="133.601µs"
	I0610 11:14:19.870150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="198.702µs"
	I0610 11:14:20.257904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.7µs"
	I0610 11:14:21.694738       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="123.833615ms"
	E0610 11:14:21.694853       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0610 11:14:21.695207       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="306.403µs"
	I0610 11:14:21.700984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.4µs"
	I0610 11:14:21.745901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.413789ms"
	I0610 11:14:21.747423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.2µs"
	
	
	==> kube-proxy [115b8330d533] <==
	I0610 11:05:27.986657       1 server_linux.go:69] "Using iptables proxy"
	I0610 11:05:28.031278       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.146.64"]
	I0610 11:05:28.111180       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 11:05:28.111377       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 11:05:28.111412       1 server_linux.go:165] "Using iptables Proxier"
	I0610 11:05:28.115481       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 11:05:28.116168       1 server.go:872] "Version info" version="v1.30.1"
	I0610 11:05:28.116823       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:05:28.120557       1 config.go:192] "Starting service config controller"
	I0610 11:05:28.121679       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 11:05:28.121788       1 config.go:101] "Starting endpoint slice config controller"
	I0610 11:05:28.121930       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 11:05:28.126749       1 config.go:319] "Starting node config controller"
	I0610 11:05:28.127161       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 11:05:28.222294       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 11:05:28.222375       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:05:28.228406       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b540b6d71db6] <==
	E0610 11:05:07.921274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 11:05:07.925666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 11:05:07.926087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 11:05:07.986822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 11:05:07.987136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 11:05:08.137373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 11:05:08.137484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 11:05:08.139816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 11:05:08.139869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 11:05:08.149386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 11:05:08.150025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 11:05:09.628726       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 11:14:18.236942       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="e47805c5-7a7b-4b89-9d16-10d91abbec83" pod="default/busybox-fc5497c4f-9tfq9" assumedNode="ha-368100-m02" currentNode="ha-368100-m03"
	E0610 11:14:18.282383       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-9tfq9\": pod busybox-fc5497c4f-9tfq9 is already assigned to node \"ha-368100-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-9tfq9" node="ha-368100-m03"
	E0610 11:14:18.286287       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e47805c5-7a7b-4b89-9d16-10d91abbec83(default/busybox-fc5497c4f-9tfq9) was assumed on ha-368100-m03 but assigned to ha-368100-m02" pod="default/busybox-fc5497c4f-9tfq9"
	E0610 11:14:18.287934       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-9tfq9\": pod busybox-fc5497c4f-9tfq9 is already assigned to node \"ha-368100-m02\"" pod="default/busybox-fc5497c4f-9tfq9"
	I0610 11:14:18.288192       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-9tfq9" node="ha-368100-m02"
	E0610 11:14:18.378597       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-s49nb\": pod busybox-fc5497c4f-s49nb is already assigned to node \"ha-368100-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-s49nb" node="ha-368100-m03"
	E0610 11:14:18.378674       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4ad912e0-e757-4368-99a7-6687d9687526(default/busybox-fc5497c4f-s49nb) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-s49nb"
	E0610 11:14:18.379366       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-s49nb\": pod busybox-fc5497c4f-s49nb is already assigned to node \"ha-368100-m03\"" pod="default/busybox-fc5497c4f-s49nb"
	I0610 11:14:18.379392       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-s49nb" node="ha-368100-m03"
	E0610 11:14:18.419494       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kff2v\": pod busybox-fc5497c4f-kff2v is already assigned to node \"ha-368100\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-kff2v" node="ha-368100"
	E0610 11:14:18.420731       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod beea6f51-8d7f-45a8-a021-48301c4e9268(default/busybox-fc5497c4f-kff2v) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-kff2v"
	E0610 11:14:18.422553       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kff2v\": pod busybox-fc5497c4f-kff2v is already assigned to node \"ha-368100\"" pod="default/busybox-fc5497c4f-kff2v"
	I0610 11:14:18.422680       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-kff2v" node="ha-368100"
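All three "Plugin Failed" / "ForgetPod failed" groups above follow one pattern: the scheduler's local cache assumed the pod on one node while the API server had already recorded a binding to another, so DefaultBinder's second pods/binding write was rejected and the pod was left where it was ("Pod has been assigned to node"). Which node won a given race is visible on the pod object itself; a sketch using a pod name from the log above:

        kubectl --context ha-368100 -n default get pod busybox-fc5497c4f-9tfq9 -o jsonpath='{.spec.nodeName}'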
	
	
	==> kubelet <==
	Jun 10 11:11:10 ha-368100 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:11:10 ha-368100 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:12:10 ha-368100 kubelet[2217]: E0610 11:12:10.245827    2217 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:12:10 ha-368100 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:12:10 ha-368100 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:12:10 ha-368100 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:12:10 ha-368100 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:13:10 ha-368100 kubelet[2217]: E0610 11:13:10.242243    2217 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:13:10 ha-368100 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:13:10 ha-368100 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:13:10 ha-368100 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:13:10 ha-368100 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:14:10 ha-368100 kubelet[2217]: E0610 11:14:10.242760    2217 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:14:10 ha-368100 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:14:10 ha-368100 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:14:10 ha-368100 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:14:10 ha-368100 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:14:18 ha-368100 kubelet[2217]: I0610 11:14:18.305150    2217 topology_manager.go:215] "Topology Admit Handler" podUID="beea6f51-8d7f-45a8-a021-48301c4e9268" podNamespace="default" podName="busybox-fc5497c4f-kff2v"
	Jun 10 11:14:18 ha-368100 kubelet[2217]: I0610 11:14:18.448546    2217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skgh2\" (UniqueName: \"kubernetes.io/projected/beea6f51-8d7f-45a8-a021-48301c4e9268-kube-api-access-skgh2\") pod \"busybox-fc5497c4f-kff2v\" (UID: \"beea6f51-8d7f-45a8-a021-48301c4e9268\") " pod="default/busybox-fc5497c4f-kff2v"
	Jun 10 11:14:19 ha-368100 kubelet[2217]: I0610 11:14:19.528047    2217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="933e7b7f774c62b84bd1c6980099a49ce8b12d42f25be8182a33603cb751e0a6"
	Jun 10 11:15:10 ha-368100 kubelet[2217]: E0610 11:15:10.244728    2217 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:15:10 ha-368100 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:15:10 ha-368100 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:15:10 ha-368100 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:15:10 ha-368100 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
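The once-a-minute kubelet errors above are its iptables canary: kubelet periodically creates a KUBE-KUBELET-CANARY chain in the nat table of both iptables and ip6tables so it can detect external rule flushes, and the IPv6 half fails here because the guest kernel exposes no ip6tables nat table. For an IPv4-only cluster the failure is cosmetic; a hedged way to confirm from inside the VM:

        out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo ip6tables -t nat -L KUBE-KUBELET-CANARY"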
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 11:15:17.809760    8644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
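A hedged note on the stderr warning, which recurs throughout this report and is unrelated to any failure: minikube's embedded Docker client looks up the current Docker CLI context ("default"; the hash in the path is its sha256) and warns because no meta.json was ever written for it on this CI host. Inspecting the context store is enough to confirm the state:

        docker context ls
        docker context inspect default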
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-368100 -n ha-368100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-368100 -n ha-368100: (13.4615662s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-368100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (72.32s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (707.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 status --output json -v=7 --alsologtostderr: (52.5724387s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp testdata\cp-test.txt ha-368100:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp testdata\cp-test.txt ha-368100:/home/docker/cp-test.txt: (10.4978027s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test.txt": (10.3618005s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile372301559\001\cp-test_ha-368100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile372301559\001\cp-test_ha-368100.txt: (10.5700807s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test.txt": (10.4648466s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100:/home/docker/cp-test.txt ha-368100-m02:/home/docker/cp-test_ha-368100_ha-368100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100:/home/docker/cp-test.txt ha-368100-m02:/home/docker/cp-test_ha-368100_ha-368100-m02.txt: (18.0528339s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test.txt": (10.5515364s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test_ha-368100_ha-368100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test_ha-368100_ha-368100-m02.txt": (10.3610258s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100:/home/docker/cp-test.txt ha-368100-m03:/home/docker/cp-test_ha-368100_ha-368100-m03.txt
E0610 11:23:17.569147    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100:/home/docker/cp-test.txt ha-368100-m03:/home/docker/cp-test_ha-368100_ha-368100-m03.txt: (18.3486364s)
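The cert_rotation errors interleaved with this test (the 11:23 error above and its repeats at 11:24, 11:28 and 11:29 below) read, hedged, as stale kubeconfig state: the shared kubeconfig still carries client-certificate paths for the earlier functional-228600 and addons-987700 profiles, whose .minikube\profiles directories were removed when those suites cleaned up, so the background certificate reloader keeps failing to re-read the keys. Pruning the leftover entries would silence them, e.g.:

        kubectl config delete-context functional-228600
        kubectl config delete-user functional-228600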
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test.txt": (10.505603s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test_ha-368100_ha-368100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test_ha-368100_ha-368100-m03.txt": (10.5175002s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100:/home/docker/cp-test.txt ha-368100-m04:/home/docker/cp-test_ha-368100_ha-368100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100:/home/docker/cp-test.txt ha-368100-m04:/home/docker/cp-test_ha-368100_ha-368100-m04.txt: (18.3280525s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test.txt": (10.5071242s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test_ha-368100_ha-368100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test_ha-368100_ha-368100-m04.txt": (10.5196136s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp testdata\cp-test.txt ha-368100-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp testdata\cp-test.txt ha-368100-m02:/home/docker/cp-test.txt: (10.4998467s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test.txt": (10.4141149s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile372301559\001\cp-test_ha-368100-m02.txt
E0610 11:24:41.857052    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile372301559\001\cp-test_ha-368100-m02.txt: (10.4456714s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test.txt": (10.3037864s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m02:/home/docker/cp-test.txt ha-368100:/home/docker/cp-test_ha-368100-m02_ha-368100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m02:/home/docker/cp-test.txt ha-368100:/home/docker/cp-test_ha-368100-m02_ha-368100.txt: (18.1132108s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test.txt": (10.4770852s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test_ha-368100-m02_ha-368100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test_ha-368100-m02_ha-368100.txt": (10.4576531s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m02:/home/docker/cp-test.txt ha-368100-m03:/home/docker/cp-test_ha-368100-m02_ha-368100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m02:/home/docker/cp-test.txt ha-368100-m03:/home/docker/cp-test_ha-368100-m02_ha-368100-m03.txt: (18.0960159s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test.txt": (10.3922736s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test_ha-368100-m02_ha-368100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test_ha-368100-m02_ha-368100-m03.txt": (10.4508249s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m02:/home/docker/cp-test.txt ha-368100-m04:/home/docker/cp-test_ha-368100-m02_ha-368100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m02:/home/docker/cp-test.txt ha-368100-m04:/home/docker/cp-test_ha-368100-m02_ha-368100-m04.txt: (18.1909872s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test.txt": (10.5209843s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test_ha-368100-m02_ha-368100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test_ha-368100-m02_ha-368100-m04.txt": (10.5276885s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp testdata\cp-test.txt ha-368100-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp testdata\cp-test.txt ha-368100-m03:/home/docker/cp-test.txt: (10.4640447s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test.txt": (10.5029325s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile372301559\001\cp-test_ha-368100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile372301559\001\cp-test_ha-368100-m03.txt: (10.5086358s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test.txt": (10.5918226s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt ha-368100:/home/docker/cp-test_ha-368100-m03_ha-368100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt ha-368100:/home/docker/cp-test_ha-368100-m03_ha-368100.txt: (18.4069774s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test.txt": (10.5177483s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test_ha-368100-m03_ha-368100.txt"
E0610 11:28:17.565993    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test_ha-368100-m03_ha-368100.txt": (10.4553876s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt ha-368100-m02:/home/docker/cp-test_ha-368100-m03_ha-368100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt ha-368100-m02:/home/docker/cp-test_ha-368100-m03_ha-368100-m02.txt: (18.3635059s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test.txt": (10.4603543s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test_ha-368100-m03_ha-368100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test_ha-368100-m03_ha-368100-m02.txt": (10.3930231s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt ha-368100-m04:/home/docker/cp-test_ha-368100-m03_ha-368100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt ha-368100-m04:/home/docker/cp-test_ha-368100-m03_ha-368100-m04.txt: (18.1888292s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test.txt"
E0610 11:29:25.073581    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test.txt": (10.5696178s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test_ha-368100-m03_ha-368100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test_ha-368100-m03_ha-368100-m04.txt": (10.5249039s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp testdata\cp-test.txt ha-368100-m04:/home/docker/cp-test.txt
E0610 11:29:41.851640    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp testdata\cp-test.txt ha-368100-m04:/home/docker/cp-test.txt: (10.587683s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt": (10.638651s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile372301559\001\cp-test_ha-368100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile372301559\001\cp-test_ha-368100-m04.txt: (10.4570494s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt": (10.416289s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt ha-368100:/home/docker/cp-test_ha-368100-m04_ha-368100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt ha-368100:/home/docker/cp-test_ha-368100-m04_ha-368100.txt: (18.3133737s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt": (10.4972906s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test_ha-368100-m04_ha-368100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100 "sudo cat /home/docker/cp-test_ha-368100-m04_ha-368100.txt": (10.4170932s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt ha-368100-m02:/home/docker/cp-test_ha-368100-m04_ha-368100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt ha-368100-m02:/home/docker/cp-test_ha-368100-m04_ha-368100-m02.txt: (18.2279468s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt": (10.5633766s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test_ha-368100-m04_ha-368100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m02 "sudo cat /home/docker/cp-test_ha-368100-m04_ha-368100-m02.txt": (10.4774464s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt ha-368100-m03:/home/docker/cp-test_ha-368100-m04_ha-368100-m03.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt ha-368100-m03:/home/docker/cp-test_ha-368100-m04_ha-368100-m03.txt: exit status 1 (16.9371494s)

                                                
                                                
** stderr ** 
	W0610 11:31:40.033512    8296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:558: failed to run command by deadline; exceeded timeout: out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt ha-368100-m03:/home/docker/cp-test_ha-368100-m04_ha-368100-m03.txt
helpers_test.go:561: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt ha-368100-m03:/home/docker/cp-test_ha-368100-m04_ha-368100-m03.txt": exit status 1
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline; exceeded timeout: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m04 \"sudo cat /home/docker/cp-test.txt\"": context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test_ha-368100-m04_ha-368100-m03.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test_ha-368100-m04_ha-368100-m03.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline; exceeded timeout: out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 "sudo cat /home/docker/cp-test_ha-368100-m04_ha-368100-m03.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-368100 ssh -n ha-368100-m03 \"sudo cat /home/docker/cp-test_ha-368100-m04_ha-368100-m03.txt\"": context deadline exceeded
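The CopyFile failure is budget exhaustion rather than a broken copy path: every completed step above succeeded, but on this Hyper-V host each node-to-node cp costs ~18.2s and each ssh verification ~10.5s, and the full 4-node copy matrix has to fit inside the test deadline. A rough tally of the timings logged above:

        initial status probe                        ~52.6s
         8 cp to/from the host   x ~10.5s  =   ~84s
        11 node-to-node cp       x ~18.2s  =  ~200s
        30 ssh verifications     x ~10.5s  =  ~315s
        final cp, cut off at the deadline      ~17s
                                           =  ~669s of the 707.07s total

The remaining ~38s is per-step process start-up overhead.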
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-368100 -n ha-368100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-368100 -n ha-368100: (13.4825018s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 logs -n 25: (9.5820307s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-368100 ssh -n ha-368100-m04 sudo cat                                                                                  | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:26 UTC | 10 Jun 24 11:26 UTC |
	|         | /home/docker/cp-test_ha-368100-m02_ha-368100-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-368100 cp testdata\cp-test.txt                                                                                        | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:26 UTC | 10 Jun 24 11:27 UTC |
	|         | ha-368100-m03:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n                                                                                                         | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:27 UTC | 10 Jun 24 11:27 UTC |
	|         | ha-368100-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt                                                                      | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:27 UTC | 10 Jun 24 11:27 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile372301559\001\cp-test_ha-368100-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n                                                                                                         | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:27 UTC | 10 Jun 24 11:27 UTC |
	|         | ha-368100-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt                                                                      | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:27 UTC | 10 Jun 24 11:27 UTC |
	|         | ha-368100:/home/docker/cp-test_ha-368100-m03_ha-368100.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n                                                                                                         | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:28 UTC | 10 Jun 24 11:28 UTC |
	|         | ha-368100-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n ha-368100 sudo cat                                                                                      | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:28 UTC | 10 Jun 24 11:28 UTC |
	|         | /home/docker/cp-test_ha-368100-m03_ha-368100.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt                                                                      | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:28 UTC | 10 Jun 24 11:28 UTC |
	|         | ha-368100-m02:/home/docker/cp-test_ha-368100-m03_ha-368100-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n                                                                                                         | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:28 UTC | 10 Jun 24 11:28 UTC |
	|         | ha-368100-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n ha-368100-m02 sudo cat                                                                                  | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:28 UTC | 10 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_ha-368100-m03_ha-368100-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-368100 cp ha-368100-m03:/home/docker/cp-test.txt                                                                      | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:29 UTC | 10 Jun 24 11:29 UTC |
	|         | ha-368100-m04:/home/docker/cp-test_ha-368100-m03_ha-368100-m04.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n                                                                                                         | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:29 UTC | 10 Jun 24 11:29 UTC |
	|         | ha-368100-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n ha-368100-m04 sudo cat                                                                                  | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:29 UTC | 10 Jun 24 11:29 UTC |
	|         | /home/docker/cp-test_ha-368100-m03_ha-368100-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-368100 cp testdata\cp-test.txt                                                                                        | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:29 UTC | 10 Jun 24 11:29 UTC |
	|         | ha-368100-m04:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n                                                                                                         | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:29 UTC | 10 Jun 24 11:30 UTC |
	|         | ha-368100-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt                                                                      | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:30 UTC | 10 Jun 24 11:30 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile372301559\001\cp-test_ha-368100-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n                                                                                                         | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:30 UTC | 10 Jun 24 11:30 UTC |
	|         | ha-368100-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt                                                                      | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:30 UTC | 10 Jun 24 11:30 UTC |
	|         | ha-368100:/home/docker/cp-test_ha-368100-m04_ha-368100.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n                                                                                                         | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:30 UTC | 10 Jun 24 11:30 UTC |
	|         | ha-368100-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n ha-368100 sudo cat                                                                                      | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:30 UTC | 10 Jun 24 11:31 UTC |
	|         | /home/docker/cp-test_ha-368100-m04_ha-368100.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt                                                                      | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:31 UTC | 10 Jun 24 11:31 UTC |
	|         | ha-368100-m02:/home/docker/cp-test_ha-368100-m04_ha-368100-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n                                                                                                         | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:31 UTC | 10 Jun 24 11:31 UTC |
	|         | ha-368100-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-368100 ssh -n ha-368100-m02 sudo cat                                                                                  | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:31 UTC | 10 Jun 24 11:31 UTC |
	|         | /home/docker/cp-test_ha-368100-m04_ha-368100-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-368100 cp ha-368100-m04:/home/docker/cp-test.txt                                                                      | ha-368100 | minikube6\jenkins | v1.33.1 | 10 Jun 24 11:31 UTC |                     |
	|         | ha-368100-m03:/home/docker/cp-test_ha-368100-m04_ha-368100-m03.txt                                                       |           |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 11:01:57
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 11:01:57.021959   12440 out.go:291] Setting OutFile to fd 968 ...
	I0610 11:01:57.022986   12440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:01:57.022986   12440 out.go:304] Setting ErrFile to fd 944...
	I0610 11:01:57.022986   12440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:01:57.049032   12440 out.go:298] Setting JSON to false
	I0610 11:01:57.053939   12440 start.go:129] hostinfo: {"hostname":"minikube6","uptime":17205,"bootTime":1718000111,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 11:01:57.054488   12440 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 11:01:57.062945   12440 out.go:177] * [ha-368100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 11:01:57.063284   12440 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 11:01:57.063284   12440 notify.go:220] Checking for updates...
	I0610 11:01:57.071787   12440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:01:57.074586   12440 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 11:01:57.076886   12440 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:01:57.079532   12440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:01:57.081422   12440 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:02:02.534821   12440 out.go:177] * Using the hyperv driver based on user configuration
	I0610 11:02:02.538941   12440 start.go:297] selected driver: hyperv
	I0610 11:02:02.538989   12440 start.go:901] validating driver "hyperv" against <nil>
	I0610 11:02:02.538989   12440 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:02:02.590314   12440 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 11:02:02.592943   12440 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:02:02.593039   12440 cni.go:84] Creating CNI manager for ""
	I0610 11:02:02.593039   12440 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 11:02:02.593039   12440 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 11:02:02.593406   12440 start.go:340] cluster config:
	{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
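The generated config above maps onto start flags in the obvious way; a hedged reconstruction of roughly how this cluster was requested (these are real minikube flags, but the exact invocation is not shown in this excerpt, and the multi-control-plane topology implied by MultiNodeRequested:true comes from the suite's own start arguments):

        out/minikube-windows-amd64.exe start -p ha-368100 --driver=hyperv --memory=2200 --cpus=2 --disk-size=20000mb --kubernetes-version=v1.30.1 --container-runtime=docker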
	I0610 11:02:02.593807   12440 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:02:02.599362   12440 out.go:177] * Starting "ha-368100" primary control-plane node in "ha-368100" cluster
	I0610 11:02:02.602436   12440 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 11:02:02.602436   12440 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 11:02:02.602436   12440 cache.go:56] Caching tarball of preloaded images
	I0610 11:02:02.603221   12440 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 11:02:02.603599   12440 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 11:02:02.603815   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:02:02.604447   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json: {Name:mk3ae4ba2ecba2ca11cb354f04b2c0d5351cff57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:02:02.605393   12440 start.go:360] acquireMachinesLock for ha-368100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:02:02.605393   12440 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-368100"
	I0610 11:02:02.605775   12440 start.go:93] Provisioning new machine with config: &{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:02:02.605775   12440 start.go:125] createHost starting for "" (driver="hyperv")
	I0610 11:02:02.606293   12440 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 11:02:02.609110   12440 start.go:159] libmachine.API.Create for "ha-368100" (driver="hyperv")
	I0610 11:02:02.609110   12440 client.go:168] LocalClient.Create starting
	I0610 11:02:02.609452   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 11:02:02.609998   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:02:02.609998   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:02:02.610172   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 11:02:02.610172   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:02:02.610172   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:02:02.610172   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 11:02:04.687722   12440 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 11:02:04.687722   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:04.687722   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 11:02:06.469298   12440 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 11:02:06.469298   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:06.469390   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:02:08.001965   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:02:08.010256   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:08.010256   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:02:11.717132   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:02:11.728794   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:11.731656   12440 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 11:02:12.240927   12440 main.go:141] libmachine: Creating SSH key...
	I0610 11:02:12.582680   12440 main.go:141] libmachine: Creating VM...
	I0610 11:02:12.582680   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:02:15.489701   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:02:15.489701   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:15.489701   12440 main.go:141] libmachine: Using switch "Default Switch"
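No external switch exists on this host, so the driver falls back to the built-in "Default Switch" (SwitchType 1, i.e. Internal, matched by its well-known GUID rather than by type). When a specific switch is wanted, minikube accepts it as a start flag; a sketch with a hypothetical switch name:

        out/minikube-windows-amd64.exe start -p ha-368100 --driver=hyperv --hyperv-virtual-switch="My External Switch"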
	I0610 11:02:15.489701   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:02:17.245811   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:02:17.257183   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:17.257183   12440 main.go:141] libmachine: Creating VHD
	I0610 11:02:17.257183   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 11:02:21.079461   12440 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A16EA961-09AB-4873-A890-7E3ACDEEE574
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 11:02:21.090839   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:21.090839   12440 main.go:141] libmachine: Writing magic tar header
	I0610 11:02:21.090839   12440 main.go:141] libmachine: Writing SSH key tar header
	I0610 11:02:21.100255   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 11:02:24.317970   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:24.329971   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:24.329971   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\disk.vhd' -SizeBytes 20000MB
	I0610 11:02:26.954132   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:26.954132   12440 main.go:141] libmachine: [stderr =====>] : 
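Condensed, the disk preparation above is: create a small fixed-format VHD, write the boot2docker "magic" tar header and the SSH key into its raw payload, convert it to a dynamic VHD, then grow it to the requested 20000MB. A sketch of just the Hyper-V half (the raw tar writes happen inside the Go driver between steps and are omitted):

    $dir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100'
    Hyper-V\New-VHD     -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # ...magic tar header and SSH key tar header are written into fixed.vhd here...
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD  -Path "$dir\disk.vhd" -SizeBytes 20000MB
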
	I0610 11:02:26.965785   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-368100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 11:02:30.700381   12440 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-368100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 11:02:30.713337   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:30.713337   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-368100 -DynamicMemoryEnabled $false
	I0610 11:02:33.038991   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:33.050735   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:33.050735   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-368100 -Count 2
	I0610 11:02:35.296368   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:35.296368   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:35.296368   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-368100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\boot2docker.iso'
	I0610 11:02:38.015221   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:38.015221   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:38.015221   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-368100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\disk.vhd'
	I0610 11:02:40.749705   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:40.760916   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:40.760916   12440 main.go:141] libmachine: Starting VM...
	I0610 11:02:40.760916   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-368100
	I0610 11:02:44.000260   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:44.000260   12440 main.go:141] libmachine: [stderr =====>] : 
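The VM itself is assembled from the cmdlets logged above; condensed into one sequence, with the same names and sizes as this run:

    $dir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100'
    Hyper-V\New-VM ha-368100 -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-368100 -DynamicMemoryEnabled $false   # pin memory at 2200 MB
    Hyper-V\Set-VMProcessor ha-368100 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-368100 -Path "$dir\boot2docker.iso"  # boot ISO
    Hyper-V\Add-VMHardDiskDrive -VMName ha-368100 -Path "$dir\disk.vhd"    # disk prepared above
    Hyper-V\Start-VM ha-368100
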
	I0610 11:02:44.000260   12440 main.go:141] libmachine: Waiting for host to start...
	I0610 11:02:44.000260   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:02:46.375753   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:02:46.376423   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:46.376649   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:02:49.020133   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:49.020133   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:50.031339   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:02:52.334705   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:02:52.334705   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:52.334705   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:02:54.941801   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:02:54.950698   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:55.964880   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:02:58.276903   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:02:58.276903   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:02:58.276903   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:00.892760   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:03:00.896525   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:01.902514   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:04.215348   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:04.219344   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:04.219416   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:06.818278   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:03:06.818278   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:07.823340   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:10.153460   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:10.153460   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:10.168780   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:12.915947   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:12.927740   12440 main.go:141] libmachine: [stderr =====>] : 
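The repeated state/ipaddresses pairs above are a poll loop: Get-VM reports Running almost immediately, but the adapter exposes no address until the guest's DHCP lease lands (about 30 seconds in this run). A hypothetical standalone version of the same wait:

    do {
        Start-Sleep -Seconds 1
        $vm = Hyper-V\Get-VM ha-368100
        $ip = $vm.NetworkAdapters[0].IPAddresses | Select-Object -First 1
    } until ($vm.State -eq 'Running' -and $ip)
    "VM $($vm.Name) is $($vm.State) at $ip"
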
	I0610 11:03:12.927740   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:15.181510   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:15.192990   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:15.193051   12440 machine.go:94] provisionDockerMachine start ...
	I0610 11:03:15.193277   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:17.457073   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:17.468251   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:17.468251   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:20.201817   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:20.201817   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:20.207098   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:03:20.219301   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:03:20.219301   12440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:03:20.356174   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:03:20.356174   12440 buildroot.go:166] provisioning hostname "ha-368100"
	I0610 11:03:20.356262   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:22.601484   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:22.601484   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:22.605562   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:25.262174   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:25.273994   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:25.279094   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:03:25.279793   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:03:25.279793   12440 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-368100 && echo "ha-368100" | sudo tee /etc/hostname
	I0610 11:03:25.432011   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-368100
	
	I0610 11:03:25.432011   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:27.614482   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:27.614482   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:27.614482   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:30.248842   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:30.248842   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:30.255173   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:03:30.255972   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:03:30.255972   12440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-368100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-368100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-368100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:03:30.398701   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
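The hostname provisioning just completed sets /etc/hostname and rewrites the 127.0.1.1 entry in /etc/hosts. It can be spot-checked from the Windows host with the key this run generated (ssh.exe on PATH assumed; the key path and docker user are the ones shown later in this log):

    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    ssh -i $key -o StrictHostKeyChecking=no docker@172.17.146.64 'hostname; grep ha-368100 /etc/hosts'
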
	I0610 11:03:30.398701   12440 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 11:03:30.398846   12440 buildroot.go:174] setting up certificates
	I0610 11:03:30.398846   12440 provision.go:84] configureAuth start
	I0610 11:03:30.398846   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:32.624633   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:32.624633   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:32.626333   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:35.444325   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:35.444325   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:35.444325   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:37.788416   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:37.788416   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:37.799957   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:40.491882   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:40.491882   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:40.503675   12440 provision.go:143] copyHostCerts
	I0610 11:03:40.503675   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 11:03:40.504394   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 11:03:40.504482   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 11:03:40.504918   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 11:03:40.505829   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 11:03:40.505829   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 11:03:40.505829   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 11:03:40.506584   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 11:03:40.507620   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 11:03:40.507823   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 11:03:40.507920   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 11:03:40.508312   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 11:03:40.508312   12440 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-368100 san=[127.0.0.1 172.17.146.64 ha-368100 localhost minikube]
	I0610 11:03:40.670397   12440 provision.go:177] copyRemoteCerts
	I0610 11:03:40.680915   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:03:40.680915   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:42.862469   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:42.873191   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:42.873240   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:45.478279   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:45.478279   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:45.478279   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:03:45.587731   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9067748s)
	I0610 11:03:45.587953   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 11:03:45.588476   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:03:45.639338   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 11:03:45.640090   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:03:45.685033   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 11:03:45.685033   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0610 11:03:45.734674   12440 provision.go:87] duration metric: took 15.335702s to configureAuth
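configureAuth above regenerates the server certificate (note the SAN list including the VM IP) and copies three PEM files into /etc/docker over SSH. A quick hedged check that they landed, reusing the key path from the earlier sketch:

    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    ssh -i $key docker@172.17.146.64 'ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'
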
	I0610 11:03:45.734674   12440 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:03:45.735356   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:03:45.735356   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:47.889027   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:47.900591   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:47.900591   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:50.550215   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:50.550215   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:50.555302   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:03:50.556110   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:03:50.556110   12440 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 11:03:50.694000   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 11:03:50.694061   12440 buildroot.go:70] root file system type: tmpfs
	I0610 11:03:50.694285   12440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
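A tmpfs root indicates the boot2docker guest runs from RAM, so files under /lib/systemd/system do not persist across reboots; presumably this is why the docker unit is written out on every provision rather than only once. The same probe, run by hand (key path as in the earlier sketches):

    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    ssh -i $key docker@172.17.146.64 'df --output=fstype / | tail -n 1'
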
	I0610 11:03:50.694437   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:52.880057   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:52.891918   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:52.891918   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:03:55.504638   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:03:55.516434   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:55.522104   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:03:55.522964   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:03:55.522964   12440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 11:03:55.673968   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 11:03:55.673968   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:03:57.812688   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:03:57.812688   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:03:57.812688   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:00.389374   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:00.389413   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:00.395008   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:04:00.395008   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:04:00.395008   12440 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 11:04:02.542726   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
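The one-liner above is an idempotent install: diff exits non-zero when the target is missing (as here, "can't stat") or differs from the .new file, and only then is the new unit moved into place, enabled, and restarted. To inspect the installed unit afterwards (key path as in the earlier sketches):

    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    ssh -i $key docker@172.17.146.64 'systemctl is-enabled docker; systemctl cat docker | head -n 5'
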
	
	I0610 11:04:02.542726   12440 machine.go:97] duration metric: took 47.349218s to provisionDockerMachine
	I0610 11:04:02.542726   12440 client.go:171] duration metric: took 1m59.9326308s to LocalClient.Create
	I0610 11:04:02.543261   12440 start.go:167] duration metric: took 1m59.933165s to libmachine.API.Create "ha-368100"
	I0610 11:04:02.543318   12440 start.go:293] postStartSetup for "ha-368100" (driver="hyperv")
	I0610 11:04:02.543318   12440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:04:02.556166   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:04:02.556166   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:04.706640   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:04.717679   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:04.717679   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:07.276432   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:07.287391   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:07.287391   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:04:07.401175   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8449697s)
	I0610 11:04:07.413215   12440 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:04:07.420665   12440 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:04:07.420752   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 11:04:07.421337   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 11:04:07.421878   12440 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 11:04:07.421878   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 11:04:07.432572   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:04:07.453826   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 11:04:07.501143   12440 start.go:296] duration metric: took 4.9577846s for postStartSetup
	I0610 11:04:07.504262   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:09.702534   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:09.714376   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:09.714376   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:12.388616   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:12.388616   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:12.388616   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:04:12.391418   12440 start.go:128] duration metric: took 2m9.7845767s to createHost
	I0610 11:04:12.391418   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:14.581655   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:14.581655   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:14.581655   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:17.151121   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:17.162963   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:17.168650   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:04:17.169181   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:04:17.169181   12440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:04:17.294716   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718017457.306506406
	
	I0610 11:04:17.294802   12440 fix.go:216] guest clock: 1718017457.306506406
	I0610 11:04:17.294802   12440 fix.go:229] Guest: 2024-06-10 11:04:17.306506406 +0000 UTC Remote: 2024-06-10 11:04:12.3914184 +0000 UTC m=+135.536038001 (delta=4.915088006s)
	I0610 11:04:17.294915   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:19.465113   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:19.475914   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:19.475914   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:22.123998   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:22.129189   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:22.134964   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:04:22.135436   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.146.64 22 <nil> <nil>}
	I0610 11:04:22.135501   12440 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718017457
	I0610 11:04:22.285677   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 11:04:17 UTC 2024
	
	I0610 11:04:22.285708   12440 fix.go:236] clock set: Mon Jun 10 11:04:17 UTC 2024
	 (err=<nil>)
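Clock sync above works by reading date +%s.%N on the guest, diffing it against the host clock (delta ≈ 4.9 s here), and forcing the guest with sudo date -s @<epoch>. A host-side sketch of the same comparison; the 2-second tolerance is an arbitrary illustration, not minikube's threshold:

    $key        = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    $hostEpoch  = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
    $guestEpoch = [double](ssh -i $key docker@172.17.146.64 'date +%s.%N')
    "delta: $($guestEpoch - $hostEpoch) s"
    if ([math]::Abs($guestEpoch - $hostEpoch) -gt 2) {
        ssh -i $key docker@172.17.146.64 "sudo date -s @$hostEpoch"
    }
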
	I0610 11:04:22.285708   12440 start.go:83] releasing machines lock for "ha-368100", held for 2m19.6789681s
	I0610 11:04:22.286030   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:24.481728   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:24.492446   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:24.492446   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:27.108099   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:27.110016   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:27.114196   12440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:04:27.114196   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:27.124248   12440 ssh_runner.go:195] Run: cat /version.json
	I0610 11:04:27.124248   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:04:29.432647   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:29.432647   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:29.432647   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:04:29.445163   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:29.445163   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:29.445163   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:04:32.198454   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:32.198454   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:32.211162   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:04:32.225116   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:04:32.225116   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:04:32.226864   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:04:32.306394   12440 ssh_runner.go:235] Completed: cat /version.json: (5.1820414s)
	I0610 11:04:32.319337   12440 ssh_runner.go:195] Run: systemctl --version
	I0610 11:04:32.409137   12440 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2948974s)
	I0610 11:04:32.421251   12440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 11:04:32.430917   12440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:04:32.444411   12440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:04:32.473173   12440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:04:32.473173   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:04:32.473456   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:04:32.522113   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 11:04:32.563465   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 11:04:32.584014   12440 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 11:04:32.596106   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 11:04:32.627161   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:04:32.656288   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 11:04:32.688611   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:04:32.722100   12440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:04:32.756149   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 11:04:32.788419   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 11:04:32.819205   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 11:04:32.848460   12440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:04:32.883037   12440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:04:32.915036   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:33.126222   12440 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 11:04:33.161246   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:04:33.173871   12440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 11:04:33.213912   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:04:33.251251   12440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:04:33.292263   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:04:33.331796   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:04:33.367469   12440 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 11:04:33.443930   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:04:33.468729   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:04:33.514690   12440 ssh_runner.go:195] Run: which cri-dockerd
	I0610 11:04:33.534149   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 11:04:33.551078   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 11:04:33.595733   12440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 11:04:33.797400   12440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 11:04:34.001100   12440 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 11:04:34.001225   12440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 11:04:34.049571   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:34.246973   12440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 11:04:36.798735   12440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5517407s)
	I0610 11:04:36.812035   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 11:04:36.848395   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:04:36.884185   12440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 11:04:37.097548   12440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 11:04:37.326309   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:37.560131   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 11:04:37.609297   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:04:37.645012   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:37.859606   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
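The block above is the cri-dockerd bring-up: crictl is pointed at /var/run/cri-dockerd.sock, docker.socket is enabled, the daemon restarts with the cgroupfs driver, and cri-docker.socket/.service are unmasked, enabled, and restarted. A hedged one-liner to confirm both ends are live (key path as in the earlier sketches):

    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    ssh -i $key docker@172.17.146.64 'systemctl is-active docker cri-docker.socket cri-docker.service'
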
	I0610 11:04:37.982526   12440 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 11:04:37.995760   12440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 11:04:38.012340   12440 start.go:562] Will wait 60s for crictl version
	I0610 11:04:38.027626   12440 ssh_runner.go:195] Run: which crictl
	I0610 11:04:38.051495   12440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:04:38.124781   12440 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 11:04:38.135366   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:04:38.179142   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:04:38.217010   12440 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 11:04:38.217073   12440 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 11:04:38.222178   12440 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 11:04:38.222178   12440 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 11:04:38.222178   12440 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 11:04:38.222178   12440 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 11:04:38.225863   12440 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 11:04:38.225863   12440 ip.go:210] interface addr: 172.17.144.1/20
	I0610 11:04:38.237472   12440 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 11:04:38.240252   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
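The grep/echo pipeline above pins host.minikube.internal to the host's vEthernet address (172.17.144.1) by rewriting /etc/hosts in place; the same pattern is applied later for control-plane.minikube.internal. Both entries can be verified with (key path as in the earlier sketches):

    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    ssh -i $key docker@172.17.146.64 'grep minikube.internal /etc/hosts'
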
	I0610 11:04:38.277265   12440 kubeadm.go:877] updating cluster {Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:04:38.277854   12440 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 11:04:38.286946   12440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 11:04:38.308539   12440 docker.go:685] Got preloaded images: 
	I0610 11:04:38.308539   12440 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0610 11:04:38.320838   12440 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 11:04:38.352211   12440 ssh_runner.go:195] Run: which lz4
	I0610 11:04:38.358513   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0610 11:04:38.370904   12440 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 11:04:38.377367   12440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 11:04:38.377367   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0610 11:04:40.698429   12440 docker.go:649] duration metric: took 2.3396203s to copy over tarball
	I0610 11:04:40.711381   12440 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 11:04:49.326132   12440 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6146327s)
	I0610 11:04:49.326132   12440 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 11:04:49.393166   12440 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 11:04:49.411533   12440 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0610 11:04:49.459436   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:49.692196   12440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 11:04:52.830026   12440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1314938s)
	I0610 11:04:52.843590   12440 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 11:04:52.874546   12440 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 11:04:52.874546   12440 cache_images.go:84] Images are preloaded, skipping loading
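The preload above is: stat /preloaded.tar.lz4 on the guest (absent on first boot), scp the ~360 MB cached tarball, untar it into /var with lz4, delete the tarball, and restart docker so the image store is picked up. The final image list can be reproduced by hand (key path as in the earlier sketches):

    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa'
    ssh -i $key docker@172.17.146.64 'docker images --format "{{.Repository}}:{{.Tag}}"'
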
	I0610 11:04:52.874546   12440 kubeadm.go:928] updating node { 172.17.146.64 8443 v1.30.1 docker true true} ...
	I0610 11:04:52.874546   12440 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-368100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.146.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:04:52.887211   12440 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 11:04:52.930870   12440 cni.go:84] Creating CNI manager for ""
	I0610 11:04:52.930939   12440 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 11:04:52.930939   12440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:04:52.930939   12440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.146.64 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-368100 NodeName:ha-368100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.146.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.146.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 11:04:52.931109   12440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.146.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-368100"
	  kubeletExtraArgs:
	    node-ip: 172.17.146.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.146.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 11:04:52.931109   12440 kube-vip.go:115] generating kube-vip config ...
	I0610 11:04:52.943867   12440 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 11:04:52.970787   12440 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 11:04:52.975030   12440 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
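The manifest above is a static pod that the kubelet runs straight from /etc/kubernetes/manifests; the HA virtual IP 172.17.159.254 and port 8443 reach kube-vip as plain environment variables. A hedged sketch of building an equivalent manifest with the upstream k8s.io/api types follows; the helper name buildKubeVipPod and the reduced env set are assumptions for brevity, not minikube's code.

// Sketch: constructing a kube-vip static-pod manifest programmatically.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func buildKubeVipPod(vip, port string) *corev1.Pod {
	return &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "kube-vip", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			HostNetwork: true, // the VIP must live on the node's own interface
			Containers: []corev1.Container{{
				Name:  "kube-vip",
				Image: "ghcr.io/kube-vip/kube-vip:v0.8.0",
				Args:  []string{"manager"},
				Env: []corev1.EnvVar{
					{Name: "address", Value: vip},
					{Name: "port", Value: port},
					{Name: "cp_enable", Value: "true"},          // control-plane load balancing
					{Name: "vip_leaderelection", Value: "true"}, // one holder of the VIP at a time
				},
			}},
		},
	}
}

func main() {
	out, err := yaml.Marshal(buildKubeVipPod("172.17.159.254", "8443"))
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}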
	I0610 11:04:52.990109   12440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:04:53.008628   12440 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:04:53.019350   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0610 11:04:53.047435   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0610 11:04:53.092622   12440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:04:53.129469   12440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0610 11:04:53.170544   12440 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0610 11:04:53.225214   12440 ssh_runner.go:195] Run: grep 172.17.159.254	control-plane.minikube.internal$ /etc/hosts
	I0610 11:04:53.230321   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:04:53.271797   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:04:53.481656   12440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:04:53.510196   12440 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100 for IP: 172.17.146.64
	I0610 11:04:53.510196   12440 certs.go:194] generating shared ca certs ...
	I0610 11:04:53.510196   12440 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.510537   12440 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 11:04:53.511300   12440 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 11:04:53.511300   12440 certs.go:256] generating profile certs ...
	I0610 11:04:53.512754   12440 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.key
	I0610 11:04:53.513010   12440 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.crt with IP's: []
	I0610 11:04:53.606090   12440 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.crt ...
	I0610 11:04:53.606090   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.crt: {Name:mk2a90b8a3b74b17766eccbbc7eb46ce1b98ceeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.609586   12440 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.key ...
	I0610 11:04:53.609586   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.key: {Name:mk39c314ca788ad0206c8642c3190c202dbc04c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.611029   12440 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.0d7aa7cf
	I0610 11:04:53.611029   12440 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.0d7aa7cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.146.64 172.17.159.254]
	I0610 11:04:53.731539   12440 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.0d7aa7cf ...
	I0610 11:04:53.731539   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.0d7aa7cf: {Name:mk7a21b8eaf4af1418373c971f9fa2b030f5ba9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.736711   12440 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.0d7aa7cf ...
	I0610 11:04:53.736711   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.0d7aa7cf: {Name:mk60fe7b12e81d355e1985baf98674887649e60b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.737900   12440 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.0d7aa7cf -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt
	I0610 11:04:53.755477   12440 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.0d7aa7cf -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key
	I0610 11:04:53.756752   12440 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key
	I0610 11:04:53.756752   12440 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt with IP's: []
	I0610 11:04:53.875815   12440 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt ...
	I0610 11:04:53.875815   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt: {Name:mk1234aefdfbc9800322c56901472a33ef071cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:04:53.883806   12440 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key ...
	I0610 11:04:53.883806   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key: {Name:mke6191aa44f1764991acc108c6dbcfd72efa276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
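Each "generating signed profile cert" / "Writing cert" pair above is ordinary crypto/x509 work: create a key pair, fill in a certificate template, and sign it with the shared CA. A stdlib-only sketch, under the assumptions that the CA key is RSA in PKCS#1 PEM and with file names shortened (this mirrors the shape of crypto.go, not its exact code):

// Sketch: issue a client certificate signed by an existing CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the shared CA generated earlier (ca.crt / ca.key).
	caPEM, err := os.ReadFile("ca.crt")
	check(err)
	caKeyPEM, err := os.ReadFile("ca.key")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// Fresh key pair for the client ("minikube-user") certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	check(err)
	check(os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600))
}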
	I0610 11:04:53.885078   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 11:04:53.885078   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 11:04:53.886244   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 11:04:53.886476   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 11:04:53.886702   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 11:04:53.886870   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 11:04:53.887042   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 11:04:53.887190   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 11:04:53.887190   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 11:04:53.897079   12440 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 11:04:53.897079   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 11:04:53.897467   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 11:04:53.897859   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 11:04:53.898264   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 11:04:53.898498   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 11:04:53.898498   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:04:53.899047   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 11:04:53.899361   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 11:04:53.899642   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:04:53.944039   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:04:53.997624   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:04:54.045119   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 11:04:54.091554   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 11:04:54.135936   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 11:04:54.184277   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:04:54.227351   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:04:54.272845   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:04:54.318334   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 11:04:54.367569   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 11:04:54.410275   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:04:54.460823   12440 ssh_runner.go:195] Run: openssl version
	I0610 11:04:54.482114   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:04:54.515962   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:04:54.522457   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:04:54.536667   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:04:54.557960   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:04:54.598422   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 11:04:54.633477   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 11:04:54.641525   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 11:04:54.656122   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 11:04:54.682475   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 11:04:54.718620   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 11:04:54.753843   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 11:04:54.763678   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 11:04:54.775076   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 11:04:54.798882   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:04:54.829193   12440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:04:54.837714   12440 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 11:04:54.837714   12440 kubeadm.go:391] StartCluster: {Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:04:54.846573   12440 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 11:04:54.886378   12440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 11:04:54.920495   12440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:04:54.950571   12440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:04:54.975120   12440 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:04:54.975120   12440 kubeadm.go:156] found existing configuration files:
	
	I0610 11:04:54.986502   12440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:04:55.006300   12440 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:04:55.017815   12440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:04:55.051224   12440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:04:55.072465   12440 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:04:55.085408   12440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:04:55.115640   12440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:04:55.133141   12440 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:04:55.146882   12440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:04:55.172674   12440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:04:55.191555   12440 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:04:55.203515   12440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:04:55.222570   12440 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:04:55.696387   12440 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:05:10.634849   12440 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 11:05:10.635021   12440 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:05:10.635196   12440 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:05:10.635305   12440 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:05:10.635615   12440 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:05:10.635839   12440 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:05:10.639877   12440 out.go:204]   - Generating certificates and keys ...
	I0610 11:05:10.640444   12440 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:05:10.640673   12440 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:05:10.640673   12440 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 11:05:10.640673   12440 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 11:05:10.640673   12440 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 11:05:10.641386   12440 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 11:05:10.641386   12440 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 11:05:10.641386   12440 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-368100 localhost] and IPs [172.17.146.64 127.0.0.1 ::1]
	I0610 11:05:10.641930   12440 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 11:05:10.642092   12440 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-368100 localhost] and IPs [172.17.146.64 127.0.0.1 ::1]
	I0610 11:05:10.642092   12440 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 11:05:10.642092   12440 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 11:05:10.642092   12440 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 11:05:10.642092   12440 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:05:10.642092   12440 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:05:10.642092   12440 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 11:05:10.643198   12440 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:05:10.643387   12440 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:05:10.643387   12440 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:05:10.643387   12440 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:05:10.643387   12440 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:05:10.646726   12440 out.go:204]   - Booting up control plane ...
	I0610 11:05:10.646812   12440 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:05:10.646812   12440 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:05:10.646812   12440 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:05:10.646812   12440 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:05:10.646812   12440 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:05:10.646812   12440 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:05:10.647975   12440 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 11:05:10.647975   12440 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 11:05:10.647975   12440 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.574543ms
	I0610 11:05:10.648708   12440 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 11:05:10.648708   12440 kubeadm.go:309] [api-check] The API server is healthy after 8.003604396s
	I0610 11:05:10.648708   12440 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 11:05:10.648708   12440 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 11:05:10.648708   12440 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 11:05:10.648708   12440 kubeadm.go:309] [mark-control-plane] Marking the node ha-368100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 11:05:10.648708   12440 kubeadm.go:309] [bootstrap-token] Using token: 32k9jv.cizb7zxknrcsuenl
	I0610 11:05:10.652141   12440 out.go:204]   - Configuring RBAC rules ...
	I0610 11:05:10.652141   12440 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 11:05:10.652141   12440 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 11:05:10.653723   12440 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 11:05:10.653723   12440 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 11:05:10.653723   12440 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 11:05:10.653723   12440 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 11:05:10.653723   12440 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 11:05:10.653723   12440 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 11:05:10.653723   12440 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 11:05:10.653723   12440 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 11:05:10.653723   12440 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 11:05:10.653723   12440 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 11:05:10.653723   12440 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 11:05:10.653723   12440 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 32k9jv.cizb7zxknrcsuenl \
	I0610 11:05:10.653723   12440 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 \
	I0610 11:05:10.653723   12440 kubeadm.go:309] 	--control-plane 
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 11:05:10.653723   12440 kubeadm.go:309] 
	I0610 11:05:10.653723   12440 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 32k9jv.cizb7zxknrcsuenl \
	I0610 11:05:10.657964   12440 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
	I0610 11:05:10.658022   12440 cni.go:84] Creating CNI manager for ""
	I0610 11:05:10.658022   12440 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 11:05:10.658357   12440 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 11:05:10.675642   12440 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 11:05:10.684461   12440 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 11:05:10.684572   12440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 11:05:10.731840   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 11:05:11.343674   12440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 11:05:11.356292   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:11.359401   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-368100 minikube.k8s.io/updated_at=2024_06_10T11_05_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-368100 minikube.k8s.io/primary=true
	I0610 11:05:11.383558   12440 ops.go:34] apiserver oom_adj: -16
	I0610 11:05:11.613539   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:12.128141   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:12.613092   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:13.120355   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:13.612960   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:14.112905   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:14.613297   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:15.116439   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:15.619729   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:16.115745   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:16.615031   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:17.120418   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:17.624669   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:18.113893   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:18.616014   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:19.116938   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:19.629882   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:20.134013   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:20.628247   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:21.122092   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:21.617098   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:22.118191   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:22.616300   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:23.124398   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:23.623705   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:24.127871   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:05:24.270783   12440 kubeadm.go:1107] duration metric: took 12.9269448s to wait for elevateKubeSystemPrivileges
	W0610 11:05:24.270885   12440 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 11:05:24.270885   12440 kubeadm.go:393] duration metric: took 29.4329292s to StartCluster
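The burst of identical "kubectl get sa default" runs above is a ~500 ms poll: minikube retries until the default service account exists in the new cluster, which is what the elevateKubeSystemPrivileges duration metric measures. A minimal sketch of that loop shape, run on the guest (illustrative, not minikube's exact code):

// Sketch: poll a kubectl command until it succeeds or a deadline passes.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		cmd := exec.CommandContext(ctx, "sudo",
			"/var/lib/minikube/binaries/v1.30.1/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for default service account")
		case <-time.After(500 * time.Millisecond):
		}
	}
}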
	I0610 11:05:24.270885   12440 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:05:24.271057   12440 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 11:05:24.273195   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:05:24.274615   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 11:05:24.274870   12440 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:05:24.274926   12440 start.go:240] waiting for startup goroutines ...
	I0610 11:05:24.274870   12440 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 11:05:24.275091   12440 addons.go:69] Setting storage-provisioner=true in profile "ha-368100"
	I0610 11:05:24.275197   12440 addons.go:234] Setting addon storage-provisioner=true in "ha-368100"
	I0610 11:05:24.275394   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:05:24.276816   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:05:24.277348   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:05:24.278197   12440 addons.go:69] Setting default-storageclass=true in profile "ha-368100"
	I0610 11:05:24.278197   12440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-368100"
	I0610 11:05:24.279108   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:05:24.420262   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 11:05:24.941079   12440 start.go:946] {"host.minikube.internal": 172.17.144.1} host record injected into CoreDNS's ConfigMap
	I0610 11:05:26.686354   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:05:26.686354   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:26.699940   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:05:26.699940   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:26.706445   12440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:05:26.700411   12440 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 11:05:26.709423   12440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:05:26.709423   12440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 11:05:26.709521   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:05:26.709521   12440 kapi.go:59] client config for ha-368100: &rest.Config{Host:"https://172.17.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 11:05:26.710832   12440 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 11:05:26.711781   12440 addons.go:234] Setting addon default-storageclass=true in "ha-368100"
	I0610 11:05:26.711781   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:05:26.712636   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:05:29.169044   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:05:29.169044   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:29.172503   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:05:29.226862   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:05:29.226862   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:29.231269   12440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 11:05:29.231269   12440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 11:05:29.231269   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:05:31.594701   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:05:31.594764   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:31.594825   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:05:32.027481   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:05:32.027481   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:32.028489   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:05:32.184636   12440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:05:34.372363   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:05:34.384229   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:34.384708   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:05:34.511307   12440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 11:05:34.664026   12440 round_trippers.go:463] GET https://172.17.159.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 11:05:34.664026   12440 round_trippers.go:469] Request Headers:
	I0610 11:05:34.664026   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:05:34.664026   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:05:34.679930   12440 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0610 11:05:34.680575   12440 round_trippers.go:463] PUT https://172.17.159.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0610 11:05:34.680575   12440 round_trippers.go:469] Request Headers:
	I0610 11:05:34.680575   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:05:34.680575   12440 round_trippers.go:473]     Content-Type: application/json
	I0610 11:05:34.680575   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:05:34.683700   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
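The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon marking "standard" as the cluster default. A hedged client-go sketch of the same round trip; the annotation key is the real upstream one, the kubeconfig path is the in-VM path from the log, and error handling is reduced:

// Sketch: fetch the "standard" StorageClass and mark it as the default.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// This annotation is what kubectl and the PV controller treat as "default".
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}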
	I0610 11:05:34.688602   12440 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 11:05:34.692355   12440 addons.go:510] duration metric: took 10.4174649s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0610 11:05:34.692355   12440 start.go:245] waiting for cluster config update ...
	I0610 11:05:34.692355   12440 start.go:254] writing updated cluster config ...
	I0610 11:05:34.695813   12440 out.go:177] 
	I0610 11:05:34.706396   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:05:34.706716   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:05:34.713641   12440 out.go:177] * Starting "ha-368100-m02" control-plane node in "ha-368100" cluster
	I0610 11:05:34.715656   12440 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 11:05:34.715656   12440 cache.go:56] Caching tarball of preloaded images
	I0610 11:05:34.715656   12440 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 11:05:34.715656   12440 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 11:05:34.715656   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:05:34.720412   12440 start.go:360] acquireMachinesLock for ha-368100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:05:34.720548   12440 start.go:364] duration metric: took 136.4µs to acquireMachinesLock for "ha-368100-m02"
	I0610 11:05:34.720775   12440 start.go:93] Provisioning new machine with config: &{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:05:34.720775   12440 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0610 11:05:34.721589   12440 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 11:05:34.721589   12440 start.go:159] libmachine.API.Create for "ha-368100" (driver="hyperv")
	I0610 11:05:34.725091   12440 client.go:168] LocalClient.Create starting
	I0610 11:05:34.725134   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 11:05:34.725134   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:05:34.725134   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:05:34.725134   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 11:05:34.725134   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:05:34.726267   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:05:34.726267   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 11:05:36.669146   12440 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 11:05:36.669146   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:36.678261   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 11:05:38.459777   12440 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 11:05:38.459777   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:38.463128   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:05:40.029214   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:05:40.029214   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:40.029214   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:05:43.779374   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:05:43.791301   12440 main.go:141] libmachine: [stderr =====>] : 
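Every "[executing ==>]" / "[stdout =====>]" pair in this log is the Hyper-V driver shelling out to powershell.exe and parsing what comes back; the switch query above pipes through ConvertTo-Json precisely so the output can be unmarshalled. An illustrative sketch of that pattern, with the Where-Object filter dropped for brevity (the vmSwitch struct is an assumption matching the fields selected in the query, not the driver's real type):

// Sketch: run a Hyper-V cmdlet via powershell.exe and decode its JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	// -NoProfile/-NonInteractive mirror the flags in the log lines above.
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		panic(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}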
	I0610 11:05:43.794327   12440 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 11:05:44.294743   12440 main.go:141] libmachine: Creating SSH key...
	I0610 11:05:44.754791   12440 main.go:141] libmachine: Creating VM...
	I0610 11:05:44.754791   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:05:47.715940   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:05:47.715940   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:47.715940   12440 main.go:141] libmachine: Using switch "Default Switch"
	I0610 11:05:47.728151   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:05:49.523178   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:05:49.523178   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:49.523178   12440 main.go:141] libmachine: Creating VHD
	I0610 11:05:49.532162   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 11:05:53.455885   12440 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5890BBC2-C159-4447-8A45-AC73CC907BB4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 11:05:53.455885   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:53.455885   12440 main.go:141] libmachine: Writing magic tar header
	I0610 11:05:53.467821   12440 main.go:141] libmachine: Writing SSH key tar header
	I0610 11:05:53.477568   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 11:05:56.705478   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:05:56.716394   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:05:56.716394   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\disk.vhd' -SizeBytes 20000MB
	I0610 11:05:59.329886   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:05:59.342333   12440 main.go:141] libmachine: [stderr =====>] : 
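The "Writing magic tar header" / "Writing SSH key tar header" steps above follow the boot2docker disk convention: the 10MB fixed VHD's data area begins at offset 0 (the fixed-VHD footer sits at the end of the file), a tar stream carrying the SSH key is written at the front, and on first boot the guest detects the tar magic, extracts the key, and formats the rest of the disk. A sketch under that assumption; file names and the exact in-archive path are illustrative:

    package main

    import (
        "archive/tar"
        "os"
    )

    // writeKeyTar writes a small tar archive at the start of a raw disk image.
    // A boot2docker-style guest detects the tar magic on first boot, extracts
    // .ssh/authorized_keys, and then formats the remainder of the disk.
    func writeKeyTar(diskPath, pubKeyPath string) error {
        key, err := os.ReadFile(pubKeyPath)
        if err != nil {
            return err
        }
        f, err := os.OpenFile(diskPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f) // offset 0: the fixed-VHD footer is at the end
        defer tw.Close()
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0700}); err != nil {
            return err
        }
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(key))}); err != nil {
            return err
        }
        _, err = tw.Write(key)
        return err
    }

    func main() {
        if err := writeKeyTar("fixed.vhd", "id_rsa.pub"); err != nil {
            panic(err)
        }
    }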
	I0610 11:05:59.342333   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-368100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 11:06:03.095745   12440 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-368100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 11:06:03.101476   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:03.101476   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-368100-m02 -DynamicMemoryEnabled $false
	I0610 11:06:05.414556   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:05.414556   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:05.425452   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-368100-m02 -Count 2
	I0610 11:06:07.710644   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:07.710644   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:07.710644   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-368100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\boot2docker.iso'
	I0610 11:06:10.402317   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:10.413282   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:10.413282   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-368100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\disk.vhd'
	I0610 11:06:13.211502   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:13.211502   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:13.211502   12440 main.go:141] libmachine: Starting VM...
	I0610 11:06:13.211502   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-368100-m02
	I0610 11:06:16.386038   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:16.386038   12440 main.go:141] libmachine: [stderr =====>] : 
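The disk and VM creation above is a fixed sequence of Hyper-V cmdlets. Reproducing it by hand looks like the following Go sketch (paths, sizes, and the run helper are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes one PowerShell command and fails fast, mirroring the
    // [executing ==>] lines in the log.
    func run(cmd string) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%s: %v\n%s", cmd, err, out))
        }
    }

    func main() {
        vm, dir := "ha-368100-m02", `C:\minikube\machines\ha-368100-m02`
        run(fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir))
        // ... the SSH key tar is written into fixed.vhd here ...
        run(fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir))
        run(fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir))
        run(fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, vm, dir))
        run(fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, vm))
        run(fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, vm))
        run(fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, vm, dir))
        run(fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, vm, dir))
        run(fmt.Sprintf(`Hyper-V\Start-VM %s`, vm))
    }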
	I0610 11:06:16.386038   12440 main.go:141] libmachine: Waiting for host to start...
	I0610 11:06:16.388127   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:18.790351   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:18.796013   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:18.796013   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:21.455999   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:21.456057   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:22.469472   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:24.786678   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:24.786678   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:24.786743   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:27.449360   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:27.449399   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:28.463355   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:30.805837   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:30.805837   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:30.805837   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:33.487310   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:33.487352   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:34.500449   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:36.802224   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:36.802224   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:36.805010   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:39.495387   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:06:39.497780   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:40.513112   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:42.939759   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:42.939759   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:42.941066   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:45.661914   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:06:45.661914   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:45.661914   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:47.944336   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:47.944336   12440 main.go:141] libmachine: [stderr =====>] : 
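The repeated state/address queries above are a poll loop: the adapter reports no IPv4 address until the guest's DHCP lease arrives, so the driver keeps re-querying (each PowerShell invocation itself costs a couple of seconds here) until stdout is non-empty. A sketch of the same loop:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func ps(cmd string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls the VM's first adapter until Hyper-V reports an address.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            if err != nil || state != "Running" {
                return "", fmt.Errorf("vm %s not running: %q %v", vm, state, err)
            }
            ip, _ := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
            if ip != "" {
                return ip, nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
    }

    func main() {
        ip, err := waitForIP("ha-368100-m02", 5*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println(ip)
    }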
	I0610 11:06:47.946867   12440 machine.go:94] provisionDockerMachine start ...
	I0610 11:06:47.946867   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:50.270564   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:50.270564   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:50.270564   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:52.985488   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:06:52.985488   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:52.993231   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:06:53.002410   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:06:53.002410   12440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:06:53.149356   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:06:53.149356   12440 buildroot.go:166] provisioning hostname "ha-368100-m02"
	I0610 11:06:53.149356   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:06:55.398079   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:06:55.398079   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:55.406195   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:06:58.011924   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:06:58.023354   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:06:58.028860   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:06:58.029648   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:06:58.029648   12440 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-368100-m02 && echo "ha-368100-m02" | sudo tee /etc/hostname
	I0610 11:06:58.200876   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-368100-m02
	
	I0610 11:06:58.200876   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:00.433029   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:00.436943   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:00.436943   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:03.072094   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:03.072094   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:03.084613   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:03.084791   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:03.084791   12440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-368100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-368100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-368100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:07:03.237163   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
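"Using SSH client type: native" means the provisioning commands run over Go's golang.org/x/crypto/ssh rather than an external ssh.exe. A self-contained sketch of one such command; host-key checking is skipped, which is acceptable only for a throwaway test VM:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
        }
        client, err := ssh.Dial("tcp", "172.17.157.100:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname ha-368100-m02 && echo "ha-368100-m02" | sudo tee /etc/hostname`)
        fmt.Printf("%s err=%v\n", out, err)
    }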
	I0610 11:07:03.237163   12440 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 11:07:03.237163   12440 buildroot.go:174] setting up certificates
	I0610 11:07:03.237163   12440 provision.go:84] configureAuth start
	I0610 11:07:03.237163   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:05.446322   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:05.458015   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:05.458393   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:08.150210   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:08.162252   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:08.162252   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:10.398427   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:10.398427   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:10.410604   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:13.113397   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:13.113397   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:13.113397   12440 provision.go:143] copyHostCerts
	I0610 11:07:13.113397   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 11:07:13.114086   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 11:07:13.114146   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 11:07:13.114299   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 11:07:13.115590   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 11:07:13.115817   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 11:07:13.115817   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 11:07:13.115817   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 11:07:13.117059   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 11:07:13.117433   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 11:07:13.117433   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 11:07:13.117509   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 11:07:13.118729   12440 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-368100-m02 san=[127.0.0.1 172.17.157.100 ha-368100-m02 localhost minikube]
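The server cert generated here is a CA-signed leaf whose subject alternative names cover every address a client might dial (loopback, the VM IP, the hostname). A sketch with Go's crypto/x509, assuming an RSA CA key in PKCS#1 PEM; names and validity are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func mustPEM(path string) *pem.Block {
        b, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        blk, _ := pem.Decode(b)
        return blk
    }

    func main() {
        caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
        if err != nil {
            panic(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes) // assumes PKCS#1 RSA
        if err != nil {
            panic(err)
        }
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-368100-m02"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the log line above:
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.157.100")},
            DNSNames:    []string{"ha-368100-m02", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }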
	I0610 11:07:13.482499   12440 provision.go:177] copyRemoteCerts
	I0610 11:07:13.492832   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:07:13.492832   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:15.747120   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:15.759013   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:15.759013   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:18.415303   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:18.415303   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:18.426918   12440 sshutil.go:53] new ssh client: &{IP:172.17.157.100 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\id_rsa Username:docker}
	I0610 11:07:18.542108   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.049172s)
	I0610 11:07:18.542108   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 11:07:18.542108   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:07:18.590588   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 11:07:18.590839   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 11:07:18.639326   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 11:07:18.639515   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:07:18.693044   12440 provision.go:87] duration metric: took 15.4556989s to configureAuth
	I0610 11:07:18.693044   12440 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:07:18.693672   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:07:18.693672   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:20.932378   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:20.932378   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:20.943502   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:23.623583   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:23.623583   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:23.641572   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:23.642117   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:23.642117   12440 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 11:07:23.782636   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 11:07:23.782636   12440 buildroot.go:70] root file system type: tmpfs
	I0610 11:07:23.782636   12440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 11:07:23.782636   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:25.990128   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:26.000869   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:26.000977   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:28.660797   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:28.660797   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:28.677498   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:28.678024   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:28.678193   12440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.146.64"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 11:07:28.850877   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.146.64
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 11:07:28.850877   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:31.120865   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:31.129265   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:31.129649   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:33.782449   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:33.794517   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:33.799879   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:33.800495   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:33.801097   12440 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 11:07:35.960455   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 11:07:35.960455   12440 machine.go:97] duration metric: took 48.0131942s to provisionDockerMachine
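The docker.service unit above is rendered on the host and piped through sudo tee; the empty ExecStart= line is the standard systemd idiom for clearing an inherited command before setting a new one, and the diff || { mv; daemon-reload; restart; } shell only installs and restarts when the content changed. A sketch of rendering such a unit with text/template (field names illustrative, unit trimmed to the variable parts):

    package main

    import (
        "os"
        "text/template"
    )

    // Only the fields that vary per node; the rest of the unit is static text.
    type unitParams struct {
        NoProxy          string
        InsecureRegistry string
    }

    const unitTmpl = `[Service]
    Type=notify
    Restart=on-failure
    Environment="NO_PROXY={{.NoProxy}}"

    # Clear the inherited ExecStart before setting our own; systemd rejects
    # multiple ExecStart= lines for Type=notify services.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --insecure-registry {{.InsecureRegistry}}
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unitTmpl))
        if err := t.Execute(os.Stdout, unitParams{
            NoProxy:          "172.17.146.64",
            InsecureRegistry: "10.96.0.0/12",
        }); err != nil {
            panic(err)
        }
    }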
	I0610 11:07:35.960455   12440 client.go:171] duration metric: took 2m1.2343262s to LocalClient.Create
	I0610 11:07:35.960455   12440 start.go:167] duration metric: took 2m1.2378712s to libmachine.API.Create "ha-368100"
	I0610 11:07:35.960455   12440 start.go:293] postStartSetup for "ha-368100-m02" (driver="hyperv")
	I0610 11:07:35.960455   12440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:07:35.975212   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:07:35.975212   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:38.214881   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:38.214881   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:38.214881   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:40.945057   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:40.956843   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:40.957304   12440 sshutil.go:53] new ssh client: &{IP:172.17.157.100 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\id_rsa Username:docker}
	I0610 11:07:41.067052   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0916627s)
	I0610 11:07:41.078391   12440 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:07:41.087283   12440 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:07:41.087283   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 11:07:41.087466   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 11:07:41.088509   12440 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 11:07:41.088592   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 11:07:41.098643   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:07:41.124772   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 11:07:41.182046   12440 start.go:296] duration metric: took 5.2215482s for postStartSetup
	I0610 11:07:41.184646   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:43.487837   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:43.487837   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:43.499562   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:46.146918   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:46.146918   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:46.158701   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:07:46.161348   12440 start.go:128] duration metric: took 2m11.4394948s to createHost
	I0610 11:07:46.161450   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:48.397158   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:48.397158   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:48.397248   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:51.072071   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:51.072071   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:51.088369   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:51.088505   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:51.088505   12440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:07:51.225224   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718017671.222315860
	
	I0610 11:07:51.225359   12440 fix.go:216] guest clock: 1718017671.222315860
	I0610 11:07:51.225359   12440 fix.go:229] Guest: 2024-06-10 11:07:51.22231586 +0000 UTC Remote: 2024-06-10 11:07:46.1614505 +0000 UTC m=+349.304317201 (delta=5.06086536s)
	I0610 11:07:51.225472   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:53.455300   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:53.466243   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:53.466243   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:07:56.078244   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:07:56.078244   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:56.095662   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:07:56.096356   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.157.100 22 <nil> <nil>}
	I0610 11:07:56.096356   12440 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718017671
	I0610 11:07:56.261460   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 11:07:51 UTC 2024
	
	I0610 11:07:56.261485   12440 fix.go:236] clock set: Mon Jun 10 11:07:51 UTC 2024
	 (err=<nil>)
	I0610 11:07:56.261485   12440 start.go:83] releasing machines lock for "ha-368100-m02", held for 2m21.5396776s
	I0610 11:07:56.261485   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:07:58.548735   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:07:58.560574   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:07:58.560574   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:08:01.185177   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:08:01.185177   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:01.199882   12440 out.go:177] * Found network options:
	I0610 11:08:01.202263   12440 out.go:177]   - NO_PROXY=172.17.146.64
	W0610 11:08:01.204329   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 11:08:01.210018   12440 out.go:177]   - NO_PROXY=172.17.146.64
	W0610 11:08:01.212187   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 11:08:01.214359   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 11:08:01.215592   12440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:08:01.215592   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:08:01.227487   12440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 11:08:01.227487   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m02 ).state
	I0610 11:08:03.502728   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:08:03.502784   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:03.502784   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:08:03.515565   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:08:03.515565   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:03.515565   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 11:08:06.284817   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:08:06.284881   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:06.284881   12440 sshutil.go:53] new ssh client: &{IP:172.17.157.100 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\id_rsa Username:docker}
	I0610 11:08:06.310575   12440 main.go:141] libmachine: [stdout =====>] : 172.17.157.100
	
	I0610 11:08:06.310575   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:06.310575   12440 sshutil.go:53] new ssh client: &{IP:172.17.157.100 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m02\id_rsa Username:docker}
	I0610 11:08:06.436788   12440 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2092589s)
	W0610 11:08:06.436788   12440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:08:06.436788   12440 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2211535s)
	I0610 11:08:06.449762   12440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:08:06.490382   12440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:08:06.490382   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:08:06.490382   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:08:06.546536   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 11:08:06.580658   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 11:08:06.601189   12440 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 11:08:06.615653   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 11:08:06.649669   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:08:06.681082   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 11:08:06.715149   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:08:06.751606   12440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:08:06.789191   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 11:08:06.823439   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 11:08:06.858778   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 11:08:06.901030   12440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:08:06.933931   12440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:08:06.964057   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:08:07.189106   12440 ssh_runner.go:195] Run: sudo systemctl restart containerd
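The sed edits above flip containerd to the cgroupfs driver by rewriting config.toml in place. The core substitution, done in-process rather than via sed, looks like this sketch:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        // Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }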
	I0610 11:08:07.229555   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:08:07.246311   12440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 11:08:07.286747   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:08:07.334693   12440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:08:07.384197   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:08:07.424748   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:08:07.463491   12440 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 11:08:07.531605   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:08:07.564024   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:08:07.612157   12440 ssh_runner.go:195] Run: which cri-dockerd
	I0610 11:08:07.633211   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 11:08:07.653176   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 11:08:07.701670   12440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 11:08:07.929897   12440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 11:08:08.136834   12440 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 11:08:08.137399   12440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 11:08:08.188562   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:08:08.387661   12440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 11:08:10.929454   12440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5417042s)
	I0610 11:08:10.940332   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 11:08:10.978323   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:08:11.020669   12440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 11:08:11.244716   12440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 11:08:11.472655   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:08:11.693912   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 11:08:11.742600   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:08:11.785246   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:08:12.008591   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 11:08:12.124867   12440 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 11:08:12.136514   12440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 11:08:12.146196   12440 start.go:562] Will wait 60s for crictl version
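"Will wait 60s for socket path" is a readiness poll: later kubeadm steps cannot proceed until /var/run/cri-dockerd.sock exists and crictl answers on it. A sketch of the file-existence half of that wait:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the socket file appears or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
    }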
	I0610 11:08:12.158153   12440 ssh_runner.go:195] Run: which crictl
	I0610 11:08:12.179929   12440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:08:12.239786   12440 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 11:08:12.249568   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:08:12.296044   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:08:12.336132   12440 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 11:08:12.339347   12440 out.go:177]   - env NO_PROXY=172.17.146.64
	I0610 11:08:12.342318   12440 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 11:08:12.346320   12440 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 11:08:12.346320   12440 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 11:08:12.346320   12440 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 11:08:12.346320   12440 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 11:08:12.349468   12440 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 11:08:12.349468   12440 ip.go:210] interface addr: 172.17.144.1/20
	I0610 11:08:12.360325   12440 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 11:08:12.367859   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
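The /etc/hosts one-liner above filters out any stale host.minikube.internal entry and appends the current gateway address, staging the result in a temp file before copying it into place. An equivalent sketch in Go (run as root inside the guest in the real flow):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any existing line ending in "\thost" and appends a fresh
    // "ip\thost" entry, staging the result before renaming it into place.
    func upsertHost(path, ip, host string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "172.17.144.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }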
	I0610 11:08:12.395300   12440 mustload.go:65] Loading cluster: ha-368100
	I0610 11:08:12.396165   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:08:12.396560   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:08:14.717593   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:08:14.717593   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:14.717593   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:08:14.719322   12440 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100 for IP: 172.17.157.100
	I0610 11:08:14.719322   12440 certs.go:194] generating shared ca certs ...
	I0610 11:08:14.719322   12440 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:08:14.719968   12440 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 11:08:14.720261   12440 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 11:08:14.720577   12440 certs.go:256] generating profile certs ...
	I0610 11:08:14.721281   12440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.key
	I0610 11:08:14.721466   12440 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.5909f899
	I0610 11:08:14.721621   12440 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.5909f899 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.146.64 172.17.157.100 172.17.159.254]
	I0610 11:08:14.863861   12440 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.5909f899 ...
	I0610 11:08:14.863861   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.5909f899: {Name:mk463dc3dcad723bb6b1c6d1738104e2013b59d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:08:14.865820   12440 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.5909f899 ...
	I0610 11:08:14.865820   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.5909f899: {Name:mke0c4b1f4fcbf88f651555043d45504a3e9dcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:08:14.866281   12440 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.5909f899 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt
	I0610 11:08:14.881196   12440 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.5909f899 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key
	I0610 11:08:14.882831   12440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key
	I0610 11:08:14.882831   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 11:08:14.883111   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 11:08:14.883295   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 11:08:14.883482   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 11:08:14.883618   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 11:08:14.883769   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 11:08:14.884062   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 11:08:14.884205   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 11:08:14.884820   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 11:08:14.885081   12440 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 11:08:14.885253   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 11:08:14.885278   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 11:08:14.885278   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 11:08:14.885855   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 11:08:14.886431   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 11:08:14.886678   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:08:14.886909   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 11:08:14.886909   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 11:08:14.886909   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:08:17.174617   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:08:17.175321   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:17.175321   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:08:19.955060   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:08:19.956119   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:19.956202   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:08:20.063801   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0610 11:08:20.072105   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0610 11:08:20.112095   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0610 11:08:20.119826   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0610 11:08:20.160902   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0610 11:08:20.169321   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0610 11:08:20.210037   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0610 11:08:20.218213   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0610 11:08:20.250069   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0610 11:08:20.256337   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0610 11:08:20.295355   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0610 11:08:20.302760   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0610 11:08:20.326515   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:08:20.377632   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:08:20.430002   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:08:20.480385   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 11:08:20.534108   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0610 11:08:20.588324   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 11:08:20.641818   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:08:20.692709   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:08:20.744729   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:08:20.796800   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 11:08:20.849827   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 11:08:20.896970   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0610 11:08:20.932791   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0610 11:08:20.973764   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0610 11:08:21.009499   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0610 11:08:21.053301   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0610 11:08:21.091096   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0610 11:08:21.125710   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0610 11:08:21.177619   12440 ssh_runner.go:195] Run: openssl version
	I0610 11:08:21.199444   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:08:21.232216   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:08:21.239772   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:08:21.251990   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:08:21.278629   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:08:21.320430   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 11:08:21.364801   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 11:08:21.372486   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 11:08:21.388577   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 11:08:21.412501   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 11:08:21.453160   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 11:08:21.489863   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 11:08:21.498837   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 11:08:21.515354   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 11:08:21.540785   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
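The three test/ln sequences above implement OpenSSL's hashed-directory convention: each CA PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under the name <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints. A minimal Go sketch of the same step, run locally rather than through minikube's ssh_runner (the helper name is illustrative, not minikube's):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the pattern in the log: compute the OpenSSL subject
// hash of a PEM and symlink it as <certsDir>/<hash>.0 so libraries that
// scan the hashed directory can find the CA.
func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, matching the idempotent `ln -fs` in the log.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	// Paths taken from the log above.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}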
	I0610 11:08:21.575522   12440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:08:21.585702   12440 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 11:08:21.585702   12440 kubeadm.go:928] updating node {m02 172.17.157.100 8443 v1.30.1 docker true true} ...
	I0610 11:08:21.586335   12440 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-368100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.157.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
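The [Unit]/[Service]/[Install] snippet above is the kubelet systemd drop-in that minikube renders for the new node, with the node name and IP substituted in. A small text/template sketch of that rendering, using an illustrative parameter struct rather than minikube's actual template variables:

package main

import (
	"os"
	"text/template"
)

// Illustrative stand-in for the values seen in the log; minikube's real
// template lives in its own packages and takes more parameters.
type kubeletOpts struct {
	BinDir   string
	NodeName string
	NodeIP   string
}

// The empty ExecStart= resets any ExecStart inherited from the base unit,
// per systemd drop-in convention, before the real command is set.
const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.30.1",
		NodeName: "ha-368100-m02",
		NodeIP:   "172.17.157.100",
	})
}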
	I0610 11:08:21.586413   12440 kube-vip.go:115] generating kube-vip config ...
	I0610 11:08:21.601305   12440 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 11:08:21.632532   12440 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 11:08:21.632532   12440 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
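The generated manifest runs kube-vip as a static pod on every control-plane node: the instances compete for the `plndr-cp-lock` Lease named by vip_leasename, and the current holder answers ARP for the VIP 172.17.159.254 and load-balances API traffic on port 8443. A hedged client-go sketch for inspecting who currently holds that Lease (assumes a reachable kubeconfig in the default location):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The lease name matches vip_leasename in the generated config above.
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.Background(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if id := lease.Spec.HolderIdentity; id != nil {
		fmt.Println("current kube-vip leader:", *id)
	}
}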
	I0610 11:08:21.645822   12440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:08:21.666493   12440 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 11:08:21.680908   12440 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 11:08:21.709502   12440 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0610 11:08:21.709629   12440 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0610 11:08:21.709690   12440 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
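Each `checksum=file:...sha256` query above tells the downloader to verify the binary against the SHA-256 digest published next to it before caching. A self-contained sketch of that verification step (verifySHA256 is an illustrative name, not minikube's helper; the .sha256 files on dl.k8s.io contain the bare hex digest):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 checks a downloaded file against the hex digest in its
// companion .sha256 file, as the checksum= URLs in the log imply.
func verifySHA256(binPath, sumPath string) error {
	want, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	f, err := os.Open(binPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s", binPath)
	}
	return nil
}

func main() {
	if err := verifySHA256("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}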
	I0610 11:08:22.813561   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 11:08:22.821563   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 11:08:22.836520   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 11:08:22.836520   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 11:08:22.894365   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 11:08:22.905292   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 11:08:22.935951   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 11:08:22.936099   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 11:08:23.188411   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:08:23.259018   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 11:08:23.271007   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 11:08:23.291020   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 11:08:23.291435   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0610 11:08:24.311912   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0610 11:08:24.334287   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0610 11:08:24.367044   12440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:08:24.405499   12440 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 11:08:24.452486   12440 ssh_runner.go:195] Run: grep 172.17.159.254	control-plane.minikube.internal$ /etc/hosts
	I0610 11:08:24.461431   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
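The one-liner above makes the /etc/hosts edit idempotent: it filters out any existing control-plane.minikube.internal line, appends the current VIP mapping, and copies the result back in one step. The same filter-then-append pattern in Go, writing a sibling temp file so the final rename stays on one filesystem (root privileges assumed; pinHost is an illustrative name):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so exactly one line maps addr to host,
// mirroring the grep -v / echo / cp pipeline in the log.
func pinHost(hostsPath, addr, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale entry ending in "\t<host>", like the grep -v filter.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", addr, host), "")
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := pinHost("/etc/hosts", "172.17.159.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}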
	I0610 11:08:24.498123   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:08:24.715537   12440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:08:24.750259   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:08:24.751253   12440 start.go:316] joinCluster: &{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.157.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:08:24.751454   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 11:08:24.751750   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:08:27.081776   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:08:27.081862   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:27.081937   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:08:29.819955   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:08:29.819990   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:08:29.820513   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:08:30.229641   12440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.477904s)
	I0610 11:08:30.229641   12440 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.157.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:08:30.229641   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lrj7dv.bzd7vf2qmy1wuf5a --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-368100-m02 --control-plane --apiserver-advertise-address=172.17.157.100 --apiserver-bind-port=8443"
	I0610 11:09:14.574915   12440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lrj7dv.bzd7vf2qmy1wuf5a --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-368100-m02 --control-plane --apiserver-advertise-address=172.17.157.100 --apiserver-bind-port=8443": (44.3449062s)
	I0610 11:09:14.574915   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 11:09:15.486935   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-368100-m02 minikube.k8s.io/updated_at=2024_06_10T11_09_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-368100 minikube.k8s.io/primary=false
	I0610 11:09:15.674773   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-368100-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0610 11:09:15.861978   12440 start.go:318] duration metric: took 51.110358s to joinCluster
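Joining m02 as a second control plane reduces to the two commands visible above: `kubeadm token create --print-join-command --ttl=0` on the existing node, then `kubeadm join` on the new one with --control-plane and the node's advertise address. A local-exec sketch of the join side; the log drives this over SSH through ssh_runner, and the token and CA hash below are placeholders since the real values come from the token-create step:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Placeholder token and CA hash; obtain the real values from
	// `kubeadm token create --print-join-command --ttl=0` on the first node.
	args := []string{
		"join", "control-plane.minikube.internal:8443",
		"--token", "<token>",
		"--discovery-token-ca-cert-hash", "sha256:<hash>",
		"--control-plane",
		"--apiserver-advertise-address", "172.17.157.100",
		"--apiserver-bind-port", "8443",
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.1/kubeadm", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "join failed:", err)
		os.Exit(1)
	}
}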
	I0610 11:09:15.861978   12440 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.157.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:09:15.862695   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:09:15.864907   12440 out.go:177] * Verifying Kubernetes components...
	I0610 11:09:15.881503   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:09:16.370736   12440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:09:16.411082   12440 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 11:09:16.411951   12440 kapi.go:59] client config for ha-368100: &rest.Config{Host:"https://172.17.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0610 11:09:16.412083   12440 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.159.254:8443 with https://172.17.146.64:8443
	I0610 11:09:16.412897   12440 node_ready.go:35] waiting up to 6m0s for node "ha-368100-m02" to be "Ready" ...
	I0610 11:09:16.413105   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:16.413238   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:16.413340   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:16.413340   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:16.434932   12440 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0610 11:09:16.914144   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:16.914144   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:16.914144   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:16.914144   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:16.920838   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:17.420633   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:17.420633   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:17.420633   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:17.420633   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:17.431628   12440 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 11:09:17.913617   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:17.913778   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:17.913778   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:17.913778   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:17.920236   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:18.418826   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:18.419012   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:18.419012   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:18.419012   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:18.426372   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:09:18.426372   12440 node_ready.go:53] node "ha-368100-m02" has status "Ready":"False"
	I0610 11:09:18.913581   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:18.913581   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:18.913581   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:18.913581   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:18.918510   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:19.422373   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:19.422373   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:19.422373   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:19.422373   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:19.636492   12440 round_trippers.go:574] Response Status: 200 OK in 214 milliseconds
	I0610 11:09:19.913557   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:19.913622   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:19.913729   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:19.913729   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:19.919173   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:20.422852   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:20.422852   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:20.422852   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:20.422852   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:20.482664   12440 round_trippers.go:574] Response Status: 200 OK in 59 milliseconds
	I0610 11:09:20.483559   12440 node_ready.go:53] node "ha-368100-m02" has status "Ready":"False"
	I0610 11:09:20.913578   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:20.913712   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:20.913712   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:20.913712   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:20.918544   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:21.417231   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:21.417231   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:21.417350   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:21.417350   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:21.422233   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:21.923076   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:21.923076   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:21.923076   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:21.923076   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:21.932535   12440 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 11:09:22.427678   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:22.427678   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.427678   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.427678   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.434112   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:22.434716   12440 node_ready.go:49] node "ha-368100-m02" has status "Ready":"True"
	I0610 11:09:22.434716   12440 node_ready.go:38] duration metric: took 6.0217026s for node "ha-368100-m02" to be "Ready" ...
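The polling above is minikube issuing a GET for the Node object roughly every 500ms until status.conditions reports Ready=True. The equivalent check with client-go (kubeconfig path assumed; node name taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the Node and reports whether its Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, matching the GET cadence visible in the log.
	for {
		if ok, err := nodeReady(cs, "ha-368100-m02"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}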
	I0610 11:09:22.434947   12440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:09:22.435043   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:09:22.435043   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.435043   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.435043   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.448463   12440 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0610 11:09:22.457470   12440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.457470   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2jsrh
	I0610 11:09:22.457470   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.457470   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.457470   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.461460   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:09:22.462608   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:22.462667   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.462667   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.462667   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.465464   12440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:09:22.466998   12440 pod_ready.go:92] pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:22.466998   12440 pod_ready.go:81] duration metric: took 9.5284ms for pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.466998   12440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.466998   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dl8r2
	I0610 11:09:22.466998   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.466998   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.466998   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.471730   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:22.472793   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:22.472793   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.472793   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.472793   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.477353   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:22.477353   12440 pod_ready.go:92] pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:22.478349   12440 pod_ready.go:81] duration metric: took 11.3503ms for pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.478349   12440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.478349   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100
	I0610 11:09:22.478349   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.478349   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.478349   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.481361   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:09:22.482349   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:22.482349   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.482349   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.482349   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.485352   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:09:22.486357   12440 pod_ready.go:92] pod "etcd-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:22.486357   12440 pod_ready.go:81] duration metric: took 8.0081ms for pod "etcd-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.486357   12440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:22.486357   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:09:22.486357   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.486357   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.486357   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.492345   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:22.493348   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:22.493348   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.493348   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.493348   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.497392   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:22.992283   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:09:22.992361   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.992478   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.992478   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:22.996975   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:22.998706   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:22.998779   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:22.998779   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:22.998779   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:23.005659   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:23.491589   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:09:23.491589   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:23.491589   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:23.491589   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:23.496186   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:23.497244   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:23.497244   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:23.497244   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:23.497244   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:23.501637   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:09:23.992874   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:09:23.992874   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:23.992874   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:23.992874   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:23.997949   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:23.998825   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:23.998893   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:23.998893   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:23.998893   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.003517   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:24.492575   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:09:24.492633   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.492633   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.492633   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.497589   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:24.498570   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:24.498570   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.498570   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.498570   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.504125   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:24.504664   12440 pod_ready.go:92] pod "etcd-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:24.504814   12440 pod_ready.go:81] duration metric: took 2.0184399s for pod "etcd-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:24.504885   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:24.504989   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100
	I0610 11:09:24.504989   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.504989   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.505051   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.510658   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:24.511997   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:24.511997   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.511997   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.511997   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.516632   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:24.516632   12440 pod_ready.go:92] pod "kube-apiserver-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:24.516632   12440 pod_ready.go:81] duration metric: took 11.7469ms for pod "kube-apiserver-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:24.516632   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:24.631908   12440 request.go:629] Waited for 114.1279ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:24.632101   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:24.632101   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.632101   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.632155   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.636072   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:09:24.835240   12440 request.go:629] Waited for 197.3151ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:24.835442   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:24.835442   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:24.835442   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:24.835442   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:24.841236   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
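The "Waited ... due to client-side throttling" lines are produced by client-go's own token-bucket rate limiter, not by server-side API Priority and Fairness; the limiter is governed by the QPS and Burst fields on rest.Config, which the config dump above shows at their zero values, so the client-go defaults of 5 qps / burst 10 apply and produce the ~100-200ms waits. A sketch of tuning them (values are illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// QPS=0/Burst=0 means client-go falls back to its defaults (5 qps,
	// burst 10). Raising them trades API-server load for less
	// client-side queueing.
	cfg.QPS = 50
	cfg.Burst = 100
	tuned := rest.CopyConfig(cfg) // hand this to kubernetes.NewForConfig as usual
	fmt.Printf("qps=%v burst=%v\n", tuned.QPS, tuned.Burst)
}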
	I0610 11:09:25.038242   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:25.038361   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:25.038361   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:25.038361   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:25.044066   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:25.241512   12440 request.go:629] Waited for 195.7883ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:25.241512   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:25.241512   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:25.241512   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:25.241512   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:25.247902   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:25.520825   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:25.520825   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:25.520825   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:25.520825   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:25.526612   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:25.630261   12440 request.go:629] Waited for 102.4279ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:25.630483   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:25.630483   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:25.630483   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:25.630483   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:25.639171   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:09:26.020666   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:26.020666   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:26.020666   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:26.020666   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:26.026576   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:26.036565   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:26.036565   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:26.036565   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:26.036565   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:26.041394   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:26.521768   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:26.521903   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:26.521903   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:26.521903   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:26.537341   12440 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0610 11:09:26.538454   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:26.538509   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:26.538509   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:26.538542   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:26.542643   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:26.542643   12440 pod_ready.go:102] pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace has status "Ready":"False"
	I0610 11:09:27.022950   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:09:27.023259   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.023259   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.023259   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.029548   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:27.033959   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:27.033959   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.033959   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.033959   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.040011   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:27.040011   12440 pod_ready.go:92] pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:27.040553   12440 pod_ready.go:81] duration metric: took 2.5238994s for pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.040553   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.040726   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100
	I0610 11:09:27.040726   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.040726   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.040726   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.046831   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:27.230680   12440 request.go:629] Waited for 182.3585ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:27.230933   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:27.230933   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.231004   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.231004   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.236183   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:09:27.236740   12440 pod_ready.go:92] pod "kube-controller-manager-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:27.236740   12440 pod_ready.go:81] duration metric: took 196.1854ms for pod "kube-controller-manager-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.236740   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.435687   12440 request.go:629] Waited for 198.6495ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m02
	I0610 11:09:27.436037   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m02
	I0610 11:09:27.436037   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.436037   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.436037   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.441978   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:27.638174   12440 request.go:629] Waited for 195.0451ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:27.638440   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:27.638505   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.638505   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.638505   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.646113   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:09:27.646473   12440 pod_ready.go:92] pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:27.646473   12440 pod_ready.go:81] duration metric: took 409.7302ms for pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.646473   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2j65l" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:27.828796   12440 request.go:629] Waited for 182.1216ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2j65l
	I0610 11:09:27.829019   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2j65l
	I0610 11:09:27.829019   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:27.829019   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:27.829019   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:27.837564   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:09:28.033108   12440 request.go:629] Waited for 194.3664ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:28.033108   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:28.033108   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:28.033108   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:28.033108   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:28.041736   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:09:28.041848   12440 pod_ready.go:92] pod "kube-proxy-2j65l" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:28.041848   12440 pod_ready.go:81] duration metric: took 395.3716ms for pod "kube-proxy-2j65l" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:28.041848   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2mwxs" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:28.236904   12440 request.go:629] Waited for 195.0542ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mwxs
	I0610 11:09:28.237012   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mwxs
	I0610 11:09:28.237131   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:28.237131   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:28.237131   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:28.243182   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:28.441001   12440 request.go:629] Waited for 196.3973ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:28.441319   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:28.441319   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:28.441319   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:28.441319   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:28.447670   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:09:28.448111   12440 pod_ready.go:92] pod "kube-proxy-2mwxs" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:28.448111   12440 pod_ready.go:81] duration metric: took 406.2595ms for pod "kube-proxy-2mwxs" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:28.448111   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:28.628076   12440 request.go:629] Waited for 179.9635ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100
	I0610 11:09:28.628246   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100
	I0610 11:09:28.628246   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:28.628246   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:28.628246   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:28.633957   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:28.831371   12440 request.go:629] Waited for 195.972ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:28.831811   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:09:28.831855   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:28.831855   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:28.831855   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:28.837690   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:28.839448   12440 pod_ready.go:92] pod "kube-scheduler-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:28.839448   12440 pod_ready.go:81] duration metric: took 391.334ms for pod "kube-scheduler-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:28.839548   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:29.038376   12440 request.go:629] Waited for 197.7953ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m02
	I0610 11:09:29.038376   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m02
	I0610 11:09:29.038376   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.038376   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.038376   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.050723   12440 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0610 11:09:29.240927   12440 request.go:629] Waited for 189.2434ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:29.241012   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:09:29.241101   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.241135   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.241135   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.246417   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:29.248103   12440 pod_ready.go:92] pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:09:29.248103   12440 pod_ready.go:81] duration metric: took 408.5514ms for pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:09:29.248103   12440 pod_ready.go:38] duration metric: took 6.8130992s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
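
The block above is minikube's pod_ready loop: for each control-plane pod it GETs the pod, checks the Ready condition, then GETs the pod's node, with client-go's client-side rate limiter producing the "Waited for ..." lines. A minimal sketch of the same polling pattern against the public client-go API (waitPodReady is an illustrative name, not minikube's helper; it assumes a kubeconfig at the default location):

    // podready_sketch.go — poll a pod until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady re-fetches the pod every interval until its Ready condition
    // is True or the context expires; transient Get errors are retried.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, interval time.Duration) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
            case <-time.After(interval):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        fmt.Println(waitPodReady(ctx, cs, "kube-system", "kube-proxy-2mwxs", 400*time.Millisecond))
    }
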
	I0610 11:09:29.248103   12440 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:09:29.260711   12440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:09:29.298319   12440 api_server.go:72] duration metric: took 13.4362301s to wait for apiserver process to appear ...
	I0610 11:09:29.298319   12440 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:09:29.298319   12440 api_server.go:253] Checking apiserver healthz at https://172.17.146.64:8443/healthz ...
	I0610 11:09:29.308542   12440 api_server.go:279] https://172.17.146.64:8443/healthz returned 200:
	ok
	I0610 11:09:29.308899   12440 round_trippers.go:463] GET https://172.17.146.64:8443/version
	I0610 11:09:29.308993   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.308993   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.308993   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.310351   12440 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 11:09:29.310582   12440 api_server.go:141] control plane version: v1.30.1
	I0610 11:09:29.310702   12440 api_server.go:131] duration metric: took 12.3827ms to wait for apiserver health ...
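
The healthz gate above is a plain HTTPS GET that succeeds once the endpoint returns 200 with the body "ok". A self-contained sketch of that probe (certificate verification is skipped here purely for brevity; a real client should trust the cluster CA instead):

    // healthz_sketch.go — probe the apiserver /healthz endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func apiserverHealthy(url string) bool {
        // Sketch only: skipping TLS verification; production code must pin the cluster CA.
        c := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := c.Get(url)
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok"
    }

    func main() {
        fmt.Println(apiserverHealthy("https://172.17.146.64:8443/healthz"))
    }
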
	I0610 11:09:29.310702   12440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:09:29.428188   12440 request.go:629] Waited for 117.2244ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:09:29.428188   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:09:29.428188   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.428188   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.428188   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.439791   12440 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 11:09:29.449427   12440 system_pods.go:59] 17 kube-system pods found
	I0610 11:09:29.449427   12440 system_pods.go:61] "coredns-7db6d8ff4d-2jsrh" [eec90043-8c22-4041-a178-266148b8368e] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "coredns-7db6d8ff4d-dl8r2" [39350017-f3e1-44ea-a786-c03ee7a0fd8e] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "etcd-ha-368100" [a8a99351-89b1-4e87-a251-e8735df617cc] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "etcd-ha-368100-m02" [fa26841e-b79d-483a-b723-3654fde31626] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kindnet-g66bp" [aeebb510-5026-4062-95d8-be966524f934] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kindnet-qk4fv" [3687f8c4-d986-4023-a2ad-98aa6d4ddd15] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-apiserver-ha-368100" [60620b18-7050-463c-b761-9d89caea2869] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-apiserver-ha-368100-m02" [b0105503-1e6b-4d83-a2ff-c921f7916ceb] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-controller-manager-ha-368100" [a1e4d3d6-ff46-4f52-b5ff-fdad20389b34] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-controller-manager-ha-368100-m02" [18ffec1a-6bb3-4236-98f4-88e03d83516b] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-proxy-2j65l" [dfd9f031-9a9e-46fc-ad2f-b0d61e7d7034] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-proxy-2mwxs" [4ba43598-8c67-43cc-b17a-7d7fbd835edc] Running
	I0610 11:09:29.449427   12440 system_pods.go:61] "kube-scheduler-ha-368100" [ac6c4d94-e6c2-4e43-b8ea-7819597ff572] Running
	I0610 11:09:29.452078   12440 system_pods.go:61] "kube-scheduler-ha-368100-m02" [3d706715-7b39-4a07-ad0d-2e91b0476ac7] Running
	I0610 11:09:29.452078   12440 system_pods.go:61] "kube-vip-ha-368100" [fbd9ab1c-c5b6-4b14-b4a7-8da5a58285b4] Running
	I0610 11:09:29.452078   12440 system_pods.go:61] "kube-vip-ha-368100-m02" [2e64f8be-5d5f-41ba-b4c8-9f3623e9efc6] Running
	I0610 11:09:29.452078   12440 system_pods.go:61] "storage-provisioner" [853aab4d-2671-43fd-a221-0966d875b568] Running
	I0610 11:09:29.452078   12440 system_pods.go:74] duration metric: took 141.2633ms to wait for pod list to return data ...
	I0610 11:09:29.452078   12440 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:09:29.629239   12440 request.go:629] Waited for 176.826ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/default/serviceaccounts
	I0610 11:09:29.629239   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/default/serviceaccounts
	I0610 11:09:29.629239   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.629239   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.629239   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.634712   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:29.635132   12440 default_sa.go:45] found service account: "default"
	I0610 11:09:29.635222   12440 default_sa.go:55] duration metric: took 183.1417ms for default service account to be created ...
	I0610 11:09:29.635222   12440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:09:29.832138   12440 request.go:629] Waited for 196.5758ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:09:29.832387   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:09:29.832387   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:29.832387   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:29.832387   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:29.841140   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:09:29.848577   12440 system_pods.go:86] 17 kube-system pods found
	I0610 11:09:29.848577   12440 system_pods.go:89] "coredns-7db6d8ff4d-2jsrh" [eec90043-8c22-4041-a178-266148b8368e] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "coredns-7db6d8ff4d-dl8r2" [39350017-f3e1-44ea-a786-c03ee7a0fd8e] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "etcd-ha-368100" [a8a99351-89b1-4e87-a251-e8735df617cc] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "etcd-ha-368100-m02" [fa26841e-b79d-483a-b723-3654fde31626] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "kindnet-g66bp" [aeebb510-5026-4062-95d8-be966524f934] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "kindnet-qk4fv" [3687f8c4-d986-4023-a2ad-98aa6d4ddd15] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "kube-apiserver-ha-368100" [60620b18-7050-463c-b761-9d89caea2869] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "kube-apiserver-ha-368100-m02" [b0105503-1e6b-4d83-a2ff-c921f7916ceb] Running
	I0610 11:09:29.848577   12440 system_pods.go:89] "kube-controller-manager-ha-368100" [a1e4d3d6-ff46-4f52-b5ff-fdad20389b34] Running
	I0610 11:09:29.849128   12440 system_pods.go:89] "kube-controller-manager-ha-368100-m02" [18ffec1a-6bb3-4236-98f4-88e03d83516b] Running
	I0610 11:09:29.849128   12440 system_pods.go:89] "kube-proxy-2j65l" [dfd9f031-9a9e-46fc-ad2f-b0d61e7d7034] Running
	I0610 11:09:29.849128   12440 system_pods.go:89] "kube-proxy-2mwxs" [4ba43598-8c67-43cc-b17a-7d7fbd835edc] Running
	I0610 11:09:29.849128   12440 system_pods.go:89] "kube-scheduler-ha-368100" [ac6c4d94-e6c2-4e43-b8ea-7819597ff572] Running
	I0610 11:09:29.849200   12440 system_pods.go:89] "kube-scheduler-ha-368100-m02" [3d706715-7b39-4a07-ad0d-2e91b0476ac7] Running
	I0610 11:09:29.849232   12440 system_pods.go:89] "kube-vip-ha-368100" [fbd9ab1c-c5b6-4b14-b4a7-8da5a58285b4] Running
	I0610 11:09:29.849232   12440 system_pods.go:89] "kube-vip-ha-368100-m02" [2e64f8be-5d5f-41ba-b4c8-9f3623e9efc6] Running
	I0610 11:09:29.849232   12440 system_pods.go:89] "storage-provisioner" [853aab4d-2671-43fd-a221-0966d875b568] Running
	I0610 11:09:29.849232   12440 system_pods.go:126] duration metric: took 214.009ms to wait for k8s-apps to be running ...
	I0610 11:09:29.849232   12440 system_svc.go:44] waiting for kubelet service to be running ...
	I0610 11:09:29.860301   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:09:29.898063   12440 system_svc.go:56] duration metric: took 48.83ms (WaitForService) to wait for kubelet
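
The kubelet check is just an SSH command whose exit status carries the answer: `systemctl is-active --quiet` exits 0 when the unit is active, so a nil error from Run means "running". A sketch with golang.org/x/crypto/ssh, mirroring the exact command from the log (the key path and host-key handling are illustrative):

    // kubelet_check_sketch.go — check a systemd unit's state over SSH.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func serviceActive(addr string, cfg *ssh.ClientConfig, unit string) (bool, error) {
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return false, err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return false, err
        }
        defer sess.Close()
        // Non-nil error here means a non-zero exit status, i.e. not active.
        err = sess.Run("sudo systemctl is-active --quiet service " + unit)
        return err == nil, nil
    }

    func main() {
        key, err := os.ReadFile("id_rsa") // path is illustrative
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; pin the host key in real use
        }
        ok, err := serviceActive("172.17.144.162:22", cfg, "kubelet")
        fmt.Println(ok, err)
    }
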
	I0610 11:09:29.898063   12440 kubeadm.go:576] duration metric: took 14.0359687s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:09:29.898063   12440 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:09:30.038515   12440 request.go:629] Waited for 140.4513ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes
	I0610 11:09:30.038515   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes
	I0610 11:09:30.038515   12440 round_trippers.go:469] Request Headers:
	I0610 11:09:30.038515   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:09:30.038515   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:09:30.044181   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:09:30.044181   12440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:09:30.044181   12440 node_conditions.go:123] node cpu capacity is 2
	I0610 11:09:30.044181   12440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:09:30.044181   12440 node_conditions.go:123] node cpu capacity is 2
	I0610 11:09:30.044181   12440 node_conditions.go:105] duration metric: took 146.1173ms to run NodePressure ...
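
The NodePressure step reads each node's capacity straight from the API. A sketch that lists nodes and prints the same two capacity fields logged above (assumes a kubeconfig at the default location):

    // nodecap_sketch.go — report ephemeral-storage and CPU capacity per node.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }
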
	I0610 11:09:30.044181   12440 start.go:240] waiting for startup goroutines ...
	I0610 11:09:30.044181   12440 start.go:254] writing updated cluster config ...
	I0610 11:09:30.059297   12440 out.go:177] 
	I0610 11:09:30.074159   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:09:30.074159   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:09:30.085248   12440 out.go:177] * Starting "ha-368100-m03" control-plane node in "ha-368100" cluster
	I0610 11:09:30.088734   12440 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 11:09:30.088880   12440 cache.go:56] Caching tarball of preloaded images
	I0610 11:09:30.088934   12440 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 11:09:30.088934   12440 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 11:09:30.089484   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:09:30.094469   12440 start.go:360] acquireMachinesLock for ha-368100-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:09:30.095385   12440 start.go:364] duration metric: took 916.8µs to acquireMachinesLock for "ha-368100-m03"
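
acquireMachinesLock serializes machine creation across processes, retrying every 500ms with a 13m overall timeout (the parameters in the log line above). minikube uses a cross-process mutex library for this; the sketch below is not that library but shows the same acquire-with-timeout contract using only an O_EXCL lock file:

    // machinelock_sketch.go — acquire a per-machine lock with retry delay and timeout.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            // O_EXCL makes creation atomic: exactly one process can win the lock.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("ha-368100-m03.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; provisioning may proceed")
    }
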
	I0610 11:09:30.095607   12440 start.go:93] Provisioning new machine with config: &{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.157.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:09:30.095660   12440 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0610 11:09:30.096519   12440 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 11:09:30.096519   12440 start.go:159] libmachine.API.Create for "ha-368100" (driver="hyperv")
	I0610 11:09:30.096519   12440 client.go:168] LocalClient.Create starting
	I0610 11:09:30.096519   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 11:09:30.096519   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:09:30.096519   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:09:30.099369   12440 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 11:09:30.099648   12440 main.go:141] libmachine: Decoding PEM data...
	I0610 11:09:30.099648   12440 main.go:141] libmachine: Parsing certificate...
	I0610 11:09:30.099648   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 11:09:32.180342   12440 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 11:09:32.180342   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:32.180342   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 11:09:34.071218   12440 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 11:09:34.071218   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:34.071218   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:09:35.662436   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:09:35.662436   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:35.662436   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:09:39.827039   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:09:39.827424   12440 main.go:141] libmachine: [stderr =====>] : 
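
Each libmachine step above is a one-shot powershell.exe invocation whose stdout is parsed back in Go. A sketch of the switch-discovery step, decoding the ConvertTo-Json output (it assumes the array form shown in the log; Windows PowerShell can emit a bare object for single results, which a robust parser would also handle):

    // vmswitch_sketch.go — shell out to PowerShell and decode Get-VMSwitch JSON.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            panic(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("switch %q (type %d, id %s)\n", s.Name, s.SwitchType, s.Id)
        }
    }
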
	I0610 11:09:39.829699   12440 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 11:09:40.298086   12440 main.go:141] libmachine: Creating SSH key...
	I0610 11:09:40.684623   12440 main.go:141] libmachine: Creating VM...
	I0610 11:09:40.684751   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 11:09:43.947183   12440 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 11:09:43.947267   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:43.947267   12440 main.go:141] libmachine: Using switch "Default Switch"
	I0610 11:09:43.947454   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 11:09:45.828869   12440 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 11:09:45.829091   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:45.829091   12440 main.go:141] libmachine: Creating VHD
	I0610 11:09:45.829175   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 11:09:49.945116   12440 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 36A15D03-6AD9-4444-AE99-6FBEB781697A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 11:09:49.945116   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:49.945116   12440 main.go:141] libmachine: Writing magic tar header
	I0610 11:09:49.945528   12440 main.go:141] libmachine: Writing SSH key tar header
	I0610 11:09:49.956926   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 11:09:53.341238   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:09:53.341238   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:09:53.341238   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\disk.vhd' -SizeBytes 20000MB
	I0610 11:09:56.045014   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:09:56.045313   12440 main.go:141] libmachine: [stderr =====>] : 
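
The odd-looking sequence above (a fixed 10MB VHD, "magic tar header", convert to dynamic, resize to 20000MB) is the docker-machine technique of embedding the SSH public key as a tar stream at the front of the disk, where boot2docker unpacks it on first boot. A fixed VHD is raw data followed by a 512-byte footer, so the tar can simply be written at offset 0. A sketch of the key-embedding step (file paths are illustrative):

    // vhdkey_sketch.go — write the SSH public key as a tar archive into a fixed VHD.
    package main

    import (
        "archive/tar"
        "os"
    )

    func writeKeyTar(vhdPath string, pubKey []byte) error {
        // Fixed VHD layout: raw data first, 512-byte footer last, so offset 0 is safe.
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0o700}); err != nil {
            return err
        }
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(pubKey))}); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        return tw.Close() // writes the trailing zero blocks that end the archive
    }

    func main() {
        pub, err := os.ReadFile("id_rsa.pub") // path is illustrative
        if err != nil {
            panic(err)
        }
        if err := writeKeyTar("fixed.vhd", pub); err != nil {
            panic(err)
        }
    }
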
	I0610 11:09:56.045368   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-368100-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 11:10:00.102297   12440 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-368100-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 11:10:00.102748   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:00.102748   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-368100-m03 -DynamicMemoryEnabled $false
	I0610 11:10:02.581366   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:02.581366   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:02.581636   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-368100-m03 -Count 2
	I0610 11:10:04.996432   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:04.996432   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:04.996515   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-368100-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\boot2docker.iso'
	I0610 11:10:07.844473   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:07.844669   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:07.844733   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-368100-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\disk.vhd'
	I0610 11:10:10.836037   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:10.836811   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:10.836811   12440 main.go:141] libmachine: Starting VM...
	I0610 11:10:10.836811   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-368100-m03
	I0610 11:10:14.134593   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:14.134655   12440 main.go:141] libmachine: [stderr =====>] : 
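
VM assembly is six more one-shot PowerShell calls, exactly as logged: New-VM, pin the memory, set the CPU count, attach the ISO and disk, then Start-VM. A sketch that drives the same sequence and fails fast on the first non-zero exit (the machine directory here is shortened and illustrative):

    // createvm_sketch.go — run the New-VM / Set-VM* / Start-VM sequence from Go.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func ps(script string) error {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %v\n%s", script, err, out)
        }
        return nil
    }

    func main() {
        name, dir := "ha-368100-m03", `C:\minikube\machines\ha-368100-m03` // illustrative path
        steps := []string{
            fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir),
            fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
            fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
            fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
            fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
            fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
        }
        for _, s := range steps {
            if err := ps(s); err != nil {
                panic(err)
            }
        }
    }
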
	I0610 11:10:14.134655   12440 main.go:141] libmachine: Waiting for host to start...
	I0610 11:10:14.134655   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:16.621463   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:16.621538   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:16.621609   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:19.363799   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:19.363799   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:20.373633   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:22.762313   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:22.762591   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:22.762591   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:25.553872   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:25.553872   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:26.559962   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:28.937268   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:28.937480   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:28.937480   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:31.714054   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:31.714054   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:32.724650   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:35.086562   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:35.086562   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:35.086903   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:37.872500   12440 main.go:141] libmachine: [stdout =====>] : 
	I0610 11:10:37.872500   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:38.877778   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:41.347406   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:41.347406   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:41.347406   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:44.206960   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:10:44.206998   12440 main.go:141] libmachine: [stderr =====>] : 
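
The "Waiting for host to start" loop polls the VM state and the first adapter's first IP address until both are usable; note above how the IP query returns empty output several times before DHCP completes at 11:10:44. A sketch of that loop:

    // waitip_sketch.go — poll Hyper-V until the VM is Running and has an IP.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func psOut(script string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        return strings.TrimSpace(string(out)), err
    }

    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := psOut(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            if err == nil && state == "Running" {
                ip, err := psOut(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
                if err == nil && ip != "" {
                    return ip, nil // DHCP has assigned an address
                }
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
        ip, err := waitForIP("ha-368100-m03", 6*time.Minute)
        fmt.Println(ip, err)
    }
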
	I0610 11:10:44.207121   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:46.533954   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:46.533954   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:46.533954   12440 machine.go:94] provisionDockerMachine start ...
	I0610 11:10:46.534303   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:48.934389   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:48.934523   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:48.934523   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:51.748659   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:10:51.748659   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:51.754561   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:10:51.766191   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:10:51.766191   12440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:10:51.914205   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:10:51.914313   12440 buildroot.go:166] provisioning hostname "ha-368100-m03"
	I0610 11:10:51.914382   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:54.224593   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:54.224661   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:54.224661   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:10:57.001782   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:10:57.002110   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:57.007792   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:10:57.008484   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:10:57.008484   12440 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-368100-m03 && echo "ha-368100-m03" | sudo tee /etc/hostname
	I0610 11:10:57.194149   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-368100-m03
	
	I0610 11:10:57.194149   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:10:59.529733   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:10:59.530426   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:10:59.530426   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:02.315293   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:02.315293   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:02.322600   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:02.323073   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:02.323073   12440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-368100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-368100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-368100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:11:02.490473   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
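
Provisioning then runs small shell snippets over SSH: set the hostname, persist it, and patch /etc/hosts (the snippet above). A sketch of the hostname step using an SSH session's combined output (credentials and host-key handling are illustrative):

    // hostname_sketch.go — set the guest hostname over SSH and capture the echo.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("id_rsa") // path is illustrative
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; pin the host key in real use
        }
        client, err := ssh.Dial("tcp", "172.17.144.162:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        const h = "ha-368100-m03"
        out, err := sess.CombinedOutput(`sudo hostname ` + h + ` && echo "` + h + `" | sudo tee /etc/hostname`)
        fmt.Printf("%s err=%v\n", out, err)
    }
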
	I0610 11:11:02.490613   12440 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 11:11:02.490613   12440 buildroot.go:174] setting up certificates
	I0610 11:11:02.490673   12440 provision.go:84] configureAuth start
	I0610 11:11:02.490829   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:04.805513   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:04.805513   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:04.805513   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:07.589202   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:07.589396   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:07.589396   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:09.933307   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:09.934302   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:09.934302   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:12.810303   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:12.810303   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:12.810303   12440 provision.go:143] copyHostCerts
	I0610 11:11:12.811337   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 11:11:12.811337   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 11:11:12.811337   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 11:11:12.812336   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 11:11:12.813745   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 11:11:12.814000   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 11:11:12.814000   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 11:11:12.814457   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 11:11:12.815607   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 11:11:12.815830   12440 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 11:11:12.815830   12440 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 11:11:12.816372   12440 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 11:11:12.817281   12440 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-368100-m03 san=[127.0.0.1 172.17.144.162 ha-368100-m03 localhost minikube]
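
configureAuth issues a server certificate whose SANs cover every name and IP a client might dial: 127.0.0.1, the VM IP, the hostname, localhost, and minikube. A sketch of issuing such a certificate with crypto/x509; it is self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem pair:

    // servercert_sketch.go — issue a certificate with the SANs from the log above.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-368100-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
            DNSNames:     []string{"ha-368100-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.144.162")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed: template doubles as parent. minikube would pass its CA here.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
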
	I0610 11:11:13.318101   12440 provision.go:177] copyRemoteCerts
	I0610 11:11:13.331511   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:11:13.331617   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:15.647377   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:15.647377   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:15.647377   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:18.521673   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:18.522910   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:18.522910   12440 sshutil.go:53] new ssh client: &{IP:172.17.144.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\id_rsa Username:docker}
	I0610 11:11:18.641847   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.3102925s)
	I0610 11:11:18.641847   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 11:11:18.642416   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:11:18.696601   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 11:11:18.697075   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:11:18.758128   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 11:11:18.759272   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 11:11:18.828807   12440 provision.go:87] duration metric: took 16.338s to configureAuth
	I0610 11:11:18.828880   12440 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:11:18.829555   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:11:18.829555   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:21.385335   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:21.386005   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:21.386005   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:24.358797   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:24.359848   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:24.365707   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:24.365707   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:24.365707   12440 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 11:11:24.515166   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 11:11:24.515166   12440 buildroot.go:70] root file system type: tmpfs
	I0610 11:11:24.515542   12440 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 11:11:24.515628   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:26.858873   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:26.858873   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:26.858873   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:29.705890   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:29.705890   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:29.711210   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:29.711259   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:29.711259   12440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.146.64"
	Environment="NO_PROXY=172.17.146.64,172.17.157.100"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 11:11:29.888767   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.146.64
	Environment=NO_PROXY=172.17.146.64,172.17.157.100
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 11:11:29.888860   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:32.274969   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:32.274969   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:32.275971   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:35.140143   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:35.140143   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:35.145372   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:35.146637   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:35.146637   12440 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 11:11:37.402363   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 11:11:37.403353   12440 machine.go:97] duration metric: took 50.8689798s to provisionDockerMachine
	I0610 11:11:37.403353   12440 client.go:171] duration metric: took 2m7.3057803s to LocalClient.Create
	I0610 11:11:37.403353   12440 start.go:167] duration metric: took 2m7.3057803s to libmachine.API.Create "ha-368100"
	I0610 11:11:37.403353   12440 start.go:293] postStartSetup for "ha-368100-m03" (driver="hyperv")
	I0610 11:11:37.403353   12440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:11:37.415362   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:11:37.415362   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:39.810121   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:39.810121   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:39.810121   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:42.635993   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:42.635993   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:42.636340   12440 sshutil.go:53] new ssh client: &{IP:172.17.144.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\id_rsa Username:docker}
	I0610 11:11:42.770715   12440 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.355245s)
	I0610 11:11:42.785243   12440 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:11:42.793020   12440 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:11:42.793020   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 11:11:42.793603   12440 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 11:11:42.794393   12440 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 11:11:42.794393   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 11:11:42.808344   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:11:42.828145   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 11:11:42.877014   12440 start.go:296] duration metric: took 5.4735458s for postStartSetup
	I0610 11:11:42.880148   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:45.228888   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:45.228888   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:45.229590   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:48.077463   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:48.077463   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:48.078586   12440 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\config.json ...
	I0610 11:11:48.081366   12440 start.go:128] duration metric: took 2m17.9845662s to createHost
	I0610 11:11:48.081472   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:50.443719   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:50.443719   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:50.443785   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:53.232078   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:53.232078   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:53.238517   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:53.239145   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:53.239145   12440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:11:53.383688   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718017913.388182023
	
	I0610 11:11:53.383756   12440 fix.go:216] guest clock: 1718017913.388182023
	I0610 11:11:53.383756   12440 fix.go:229] Guest: 2024-06-10 11:11:53.388182023 +0000 UTC Remote: 2024-06-10 11:11:48.0813667 +0000 UTC m=+591.222235301 (delta=5.306815323s)
	I0610 11:11:53.383835   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:11:55.692516   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:11:55.693383   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:55.693488   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:11:58.472417   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:11:58.472417   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:11:58.477395   12440 main.go:141] libmachine: Using SSH client type: native
	I0610 11:11:58.478142   12440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.162 22 <nil> <nil>}
	I0610 11:11:58.478142   12440 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718017913
	I0610 11:11:58.629174   12440 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 11:11:53 UTC 2024
	
	I0610 11:11:58.629174   12440 fix.go:236] clock set: Mon Jun 10 11:11:53 UTC 2024
	 (err=<nil>)
	I0610 11:11:58.629303   12440 start.go:83] releasing machines lock for "ha-368100-m03", held for 2m28.5325619s
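
fix.go above reads the guest clock over SSH with "date +%s.%N", compares it to the host clock (the delta here was about 5.3s), and resets the guest with "sudo date -s @<unix-seconds>". A rough standalone equivalent, assuming plain ssh in place of minikube's ssh_runner and a 1-second threshold chosen only for illustration:

    // Clock-sync sketch: read the guest clock, compute the drift against the
    // local clock, and reset the guest when the drift is too large.
    package main

    import (
    	"fmt"
    	"math"
    	"os/exec"
    	"strconv"
    	"strings"
    	"time"
    )

    // runCmd is a hypothetical stand-in for minikube's ssh_runner.
    func runCmd(host, cmd string) (string, error) {
    	out, err := exec.Command("ssh", host, cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	host := "docker@172.17.144.162"
    	out, err := runCmd(host, "date +%s.%N")
    	if err != nil {
    		panic(err)
    	}
    	guest, _ := strconv.ParseFloat(out, 64)
    	delta := guest - float64(time.Now().UnixNano())/1e9
    	fmt.Printf("guest clock drift: %+.3fs\n", delta)
    	if math.Abs(delta) > 1 { // threshold is illustrative
    		_, err = runCmd(host, fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
    		fmt.Println("clock set, err:", err)
    	}
    }
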
	I0610 11:11:58.629600   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:12:00.958228   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:00.958228   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:00.958890   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:12:03.702052   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:12:03.702052   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:03.710713   12440 out.go:177] * Found network options:
	I0610 11:12:03.713491   12440 out.go:177]   - NO_PROXY=172.17.146.64,172.17.157.100
	W0610 11:12:03.715785   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 11:12:03.715870   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 11:12:03.717467   12440 out.go:177]   - NO_PROXY=172.17.146.64,172.17.157.100
	W0610 11:12:03.720499   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 11:12:03.720557   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 11:12:03.720837   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 11:12:03.720837   12440 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 11:12:03.723965   12440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:12:03.724139   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:12:03.733980   12440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 11:12:03.733980   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100-m03 ).state
	I0610 11:12:06.094233   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:06.094233   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:06.094233   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:12:06.108405   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:06.108405   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:06.108405   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 11:12:09.088715   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:12:09.089415   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:09.089415   12440 sshutil.go:53] new ssh client: &{IP:172.17.144.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\id_rsa Username:docker}
	I0610 11:12:09.115193   12440 main.go:141] libmachine: [stdout =====>] : 172.17.144.162
	
	I0610 11:12:09.115193   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:09.115937   12440 sshutil.go:53] new ssh client: &{IP:172.17.144.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100-m03\id_rsa Username:docker}
	I0610 11:12:09.196966   12440 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.4626799s)
	W0610 11:12:09.196966   12440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:12:09.209614   12440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:12:09.273391   12440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:12:09.273391   12440 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5493808s)
	I0610 11:12:09.273391   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:12:09.273816   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:12:09.329501   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 11:12:09.370970   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 11:12:09.392972   12440 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 11:12:09.403990   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 11:12:09.442091   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:12:09.478941   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 11:12:09.510983   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 11:12:09.543967   12440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:12:09.577023   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 11:12:09.615821   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 11:12:09.651203   12440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 11:12:09.685581   12440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:12:09.716822   12440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:12:09.750433   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:12:09.975274   12440 ssh_runner.go:195] Run: sudo systemctl restart containerd
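
The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (the "cgroupfs" driver the log mentions), migrate the v1 runtime names to io.containerd.runc.v2, then daemon-reload and restart. The same sequence condensed into a loop; runCmd standing in for ssh_runner is an assumption:

    // Apply the containerd edits from the log as one ordered batch.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func runCmd(host, cmd string) error { // hypothetical ssh_runner stand-in
    	return exec.Command("ssh", host, cmd).Run()
    }

    func main() {
    	host := "docker@172.17.144.162"
    	steps := []string{
    		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
    		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
    		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
    		`sudo systemctl daemon-reload`,
    		`sudo systemctl restart containerd`,
    	}
    	for _, s := range steps {
    		if err := runCmd(host, s); err != nil {
    			log.Fatalf("step %q failed: %v", s, err)
    		}
    	}
    }
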
	I0610 11:12:10.009671   12440 start.go:494] detecting cgroup driver to use...
	I0610 11:12:10.022200   12440 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 11:12:10.063253   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:12:10.106501   12440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:12:10.158449   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:12:10.196998   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:12:10.238070   12440 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 11:12:10.309214   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 11:12:10.337069   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:12:10.391068   12440 ssh_runner.go:195] Run: which cri-dockerd
	I0610 11:12:10.411614   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 11:12:10.434562   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 11:12:10.489437   12440 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 11:12:10.721780   12440 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 11:12:10.945755   12440 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 11:12:10.945858   12440 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 11:12:10.997291   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:12:11.237905   12440 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 11:12:13.793102   12440 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5551761s)
	I0610 11:12:13.805526   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 11:12:13.845720   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:12:13.884409   12440 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 11:12:14.136064   12440 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 11:12:14.352074   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:12:14.582162   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 11:12:14.627523   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 11:12:14.667868   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:12:14.881187   12440 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 11:12:15.003486   12440 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 11:12:15.015749   12440 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 11:12:15.024529   12440 start.go:562] Will wait 60s for crictl version
	I0610 11:12:15.037729   12440 ssh_runner.go:195] Run: which crictl
	I0610 11:12:15.057081   12440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:12:15.112655   12440 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 11:12:15.122689   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:12:15.170298   12440 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 11:12:15.210189   12440 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 11:12:15.214176   12440 out.go:177]   - env NO_PROXY=172.17.146.64
	I0610 11:12:15.217176   12440 out.go:177]   - env NO_PROXY=172.17.146.64,172.17.157.100
	I0610 11:12:15.219169   12440 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 11:12:15.223169   12440 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 11:12:15.223169   12440 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 11:12:15.223169   12440 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 11:12:15.223169   12440 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 11:12:15.226260   12440 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 11:12:15.227196   12440 ip.go:210] interface addr: 172.17.144.1/20
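
ip.go above walks the host's network interfaces and keeps the first one whose name matches the "vEthernet (Default Switch)" prefix, then lists its addresses to find the host-side gateway for the guests. A standalone version using only the standard library:

    // Find the Hyper-V default-switch interface and print its addresses,
    // mirroring the getIPForInterface lines in the log.
    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func main() {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		panic(err)
    	}
    	for _, ifc := range ifaces {
    		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
    			fmt.Printf("%q does not match prefix\n", ifc.Name)
    			continue
    		}
    		addrs, _ := ifc.Addrs()
    		for _, a := range addrs {
    			fmt.Println("interface addr:", a)
    		}
    	}
    }
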
	I0610 11:12:15.238175   12440 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 11:12:15.244185   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
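
The /etc/hosts edit above is deliberately idempotent: filter out any stale host.minikube.internal entry, append the current mapping, and copy the temp file back into place with sudo. A small generator for that exact one-liner:

    // Build the idempotent /etc/hosts update command used in the log.
    package main

    import "fmt"

    func updateHostsCmd(ip, name string) string {
    	return fmt.Sprintf(
    		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
    		name, ip, name)
    }

    func main() {
    	fmt.Println(updateHostsCmd("172.17.144.1", "host.minikube.internal"))
    }
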
	I0610 11:12:15.269098   12440 mustload.go:65] Loading cluster: ha-368100
	I0610 11:12:15.270315   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:12:15.271173   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:12:17.556091   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:17.556091   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:17.556220   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:12:17.556833   12440 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100 for IP: 172.17.144.162
	I0610 11:12:17.556833   12440 certs.go:194] generating shared ca certs ...
	I0610 11:12:17.556833   12440 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:12:17.557506   12440 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 11:12:17.557779   12440 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 11:12:17.557779   12440 certs.go:256] generating profile certs ...
	I0610 11:12:17.559083   12440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\client.key
	I0610 11:12:17.559191   12440 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.2e196afa
	I0610 11:12:17.559191   12440 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.2e196afa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.146.64 172.17.157.100 172.17.144.162 172.17.159.254]
	I0610 11:12:17.830488   12440 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.2e196afa ...
	I0610 11:12:17.830488   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.2e196afa: {Name:mk25ca56d579241f53857bc22bf805a9fa61f24e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:12:17.831491   12440 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.2e196afa ...
	I0610 11:12:17.831491   12440 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.2e196afa: {Name:mka09d243d5408e78ceb058be8a57ca5fbce04b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:12:17.832188   12440 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt.2e196afa -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt
	I0610 11:12:17.846369   12440 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key.2e196afa -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key
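
The apiserver certificate is re-issued at this point because adding a third control plane changes the set of addresses the cert must cover: the crypto.go line above lists every control-plane IP plus the kube-vip VIP 172.17.159.254 as IP SANs. A minimal self-signed stand-in that shows the SAN mechanics (minikube signs with the cluster CA rather than self-signing):

    // Issue a cert whose IP SANs cover all control-plane IPs and the VIP.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		IPAddresses: []net.IP{ // the SAN list from the log
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("172.17.146.64"), net.ParseIP("172.17.157.100"),
    			net.ParseIP("172.17.144.162"), net.ParseIP("172.17.159.254"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	fmt.Println("DER bytes:", len(der), "err:", err)
    }
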
	I0610 11:12:17.847016   12440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key
	I0610 11:12:17.847016   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 11:12:17.848030   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 11:12:17.848271   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 11:12:17.848371   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 11:12:17.848556   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 11:12:17.848692   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 11:12:17.848939   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 11:12:17.849257   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 11:12:17.849424   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 11:12:17.849931   12440 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 11:12:17.849969   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 11:12:17.850208   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 11:12:17.850208   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 11:12:17.850208   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 11:12:17.850208   12440 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 11:12:17.851176   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 11:12:17.851384   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:12:17.851415   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 11:12:17.851415   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:12:20.197038   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:20.197380   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:20.197474   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:12:23.136817   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:12:23.137697   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:23.138433   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:12:23.251098   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0610 11:12:23.261572   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0610 11:12:23.304021   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0610 11:12:23.312110   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0610 11:12:23.349942   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0610 11:12:23.358147   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0610 11:12:23.395193   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0610 11:12:23.403052   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0610 11:12:23.438585   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0610 11:12:23.446514   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0610 11:12:23.489915   12440 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0610 11:12:23.498390   12440 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0610 11:12:23.522298   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:12:23.584640   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:12:23.646900   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:12:23.703148   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 11:12:23.755272   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0610 11:12:23.813130   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 11:12:23.866528   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:12:23.921960   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-368100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:12:23.976111   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 11:12:24.036437   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:12:24.102185   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 11:12:24.158119   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0610 11:12:24.194318   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0610 11:12:24.244108   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0610 11:12:24.282521   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0610 11:12:24.319516   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0610 11:12:24.363765   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0610 11:12:24.402404   12440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0610 11:12:24.451217   12440 ssh_runner.go:195] Run: openssl version
	I0610 11:12:24.476290   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:12:24.514386   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:12:24.525114   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:12:24.539107   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:12:24.572462   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:12:24.615093   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 11:12:24.650240   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 11:12:24.660081   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 11:12:24.671137   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 11:12:24.697949   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 11:12:24.732144   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 11:12:24.771909   12440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 11:12:24.784352   12440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 11:12:24.800410   12440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 11:12:24.825359   12440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:12:24.863145   12440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:12:24.871378   12440 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 11:12:24.871544   12440 kubeadm.go:928] updating node {m03 172.17.144.162 8443 v1.30.1 docker true true} ...
	I0610 11:12:24.871789   12440 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-368100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.144.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:12:24.871908   12440 kube-vip.go:115] generating kube-vip config ...
	I0610 11:12:24.885822   12440 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 11:12:24.921007   12440 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 11:12:24.921254   12440 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
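
This manifest is what lets every control-plane node answer on the shared VIP: kube-vip holds leader election over the plndr-cp-lock lease (vip_leaderelection) and, with lb_enable/lb_port, balances API traffic on 8443 across the members, so 172.17.159.254 stays reachable if one node dies. kube-vip.go renders the manifest from the cluster config; a stripped-down templating stand-in (the field names here are illustrative):

    // Render the VIP-specific fragment of a kube-vip manifest.
    package main

    import (
    	"os"
    	"text/template"
    )

    const fragment = "- name: address\n" +
    	"  value: {{.VIP}}\n" +
    	"- name: lb_port\n" +
    	"  value: \"{{.Port}}\"\n"

    func main() {
    	t := template.Must(template.New("kubevip").Parse(fragment))
    	_ = t.Execute(os.Stdout, struct {
    		VIP  string
    		Port int
    	}{VIP: "172.17.159.254", Port: 8443})
    }
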
	I0610 11:12:24.934674   12440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:12:24.953081   12440 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 11:12:24.965824   12440 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 11:12:24.991728   12440 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0610 11:12:24.991728   12440 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0610 11:12:24.991728   12440 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0610 11:12:24.991728   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 11:12:24.991728   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
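
Each "Not caching binary" line above pairs the binary URL with its published .sha256 file, so the payload is verified before it is written into /var/lib/minikube/binaries. A minimal equivalent of that download-and-verify step:

    // Fetch a release binary and check it against its published sha256.
    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	url := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
    	bin, err := fetch(url)
    	if err != nil {
    		panic(err)
    	}
    	sum, err := fetch(url + ".sha256")
    	if err != nil {
    		panic(err)
    	}
    	got := sha256.Sum256(bin)
    	want := strings.Fields(string(sum))[0]
    	fmt.Println("checksum ok:", hex.EncodeToString(got[:]) == want)
    }
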
	I0610 11:12:25.007703   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 11:12:25.008559   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:12:25.009060   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 11:12:25.015252   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 11:12:25.015252   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 11:12:25.079144   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 11:12:25.079250   12440 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 11:12:25.079250   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 11:12:25.096930   12440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 11:12:25.151060   12440 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 11:12:25.151060   12440 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0610 11:12:26.591358   12440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0610 11:12:26.609413   12440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0610 11:12:26.643730   12440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:12:26.676842   12440 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 11:12:26.725172   12440 ssh_runner.go:195] Run: grep 172.17.159.254	control-plane.minikube.internal$ /etc/hosts
	I0610 11:12:26.731960   12440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:12:26.771015   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:12:27.004186   12440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:12:27.046029   12440 host.go:66] Checking if "ha-368100" exists ...
	I0610 11:12:27.090129   12440 start.go:316] joinCluster: &{Name:ha-368100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-368100 Namespace:default APIServerHAVIP:172.17.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.146.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.157.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.144.162 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:12:27.090129   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 11:12:27.091159   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-368100 ).state
	I0610 11:12:29.407950   12440 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 11:12:29.408346   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:29.408346   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-368100 ).networkadapters[0]).ipaddresses[0]
	I0610 11:12:32.241337   12440 main.go:141] libmachine: [stdout =====>] : 172.17.146.64
	
	I0610 11:12:32.241337   12440 main.go:141] libmachine: [stderr =====>] : 
	I0610 11:12:32.241793   12440 sshutil.go:53] new ssh client: &{IP:172.17.146.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-368100\id_rsa Username:docker}
	I0610 11:12:32.485996   12440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3957235s)
	I0610 11:12:32.485996   12440 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.144.162 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:12:32.486102   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gohj1v.zzhcqgoek2436t6x --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-368100-m03 --control-plane --apiserver-advertise-address=172.17.144.162 --apiserver-bind-port=8443"
	I0610 11:13:17.864991   12440 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gohj1v.zzhcqgoek2436t6x --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-368100-m03 --control-plane --apiserver-advertise-address=172.17.144.162 --apiserver-bind-port=8443": (45.3785172s)
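
The join is two steps: mint a fresh join command on the primary with "kubeadm token create --print-join-command --ttl=0", then replay it on m03 with the control-plane flags appended (CRI socket, node name, advertise address, bind port). A sketch of how the final command line is assembled; the helper name is hypothetical:

    // Append the control-plane flags from the log to a printed join command.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func controlPlaneJoin(printed, nodeName, advertiseIP string, port int) string {
    	extra := []string{
    		"--ignore-preflight-errors=all",
    		"--cri-socket unix:///var/run/cri-dockerd.sock",
    		"--node-name=" + nodeName,
    		"--control-plane",
    		"--apiserver-advertise-address=" + advertiseIP,
    		fmt.Sprintf("--apiserver-bind-port=%d", port),
    	}
    	return strings.TrimSpace(printed) + " " + strings.Join(extra, " ")
    }

    func main() {
    	printed := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
    	fmt.Println(controlPlaneJoin(printed, "ha-368100-m03", "172.17.144.162", 8443))
    }
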
	I0610 11:13:17.864991   12440 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 11:13:18.865165   12440 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.000165s)
	I0610 11:13:18.882036   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-368100-m03 minikube.k8s.io/updated_at=2024_06_10T11_13_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-368100 minikube.k8s.io/primary=false
	I0610 11:13:19.059048   12440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-368100-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0610 11:13:19.405718   12440 start.go:318] duration metric: took 52.3151597s to joinCluster
	I0610 11:13:19.405718   12440 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.17.144.162 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 11:13:19.406974   12440 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 11:13:19.410214   12440 out.go:177] * Verifying Kubernetes components...
	I0610 11:13:19.430738   12440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:13:19.921308   12440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:13:19.956625   12440 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 11:13:19.957617   12440 kapi.go:59] client config for ha-368100: &rest.Config{Host:"https://172.17.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-368100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0610 11:13:19.957797   12440 kubeadm.go:477] Overriding stale ClientConfig host https://172.17.159.254:8443 with https://172.17.146.64:8443
	I0610 11:13:19.958752   12440 node_ready.go:35] waiting up to 6m0s for node "ha-368100-m03" to be "Ready" ...
	I0610 11:13:19.958932   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:19.958932   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:19.958991   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:19.958991   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:19.974304   12440 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0610 11:13:20.461552   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:20.461552   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:20.461552   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:20.461552   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:20.466612   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:20.967568   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:20.967568   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:20.967818   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:20.967818   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:20.974178   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:21.470747   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:21.470747   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:21.470747   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:21.470747   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:21.475524   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:21.959429   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:21.959582   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:21.959582   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:21.959582   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:21.970086   12440 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 11:13:21.970086   12440 node_ready.go:53] node "ha-368100-m03" has status "Ready":"False"
	I0610 11:13:22.466468   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:22.466468   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:22.466468   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:22.466468   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:22.471505   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:22.974548   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:22.974650   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:22.974683   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:22.974683   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:22.979314   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:23.463539   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:23.463539   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:23.463539   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:23.463539   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:23.472712   12440 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 11:13:23.971089   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:23.971089   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:23.971089   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:23.971089   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:24.258484   12440 round_trippers.go:574] Response Status: 200 OK in 287 milliseconds
	I0610 11:13:24.259145   12440 node_ready.go:53] node "ha-368100-m03" has status "Ready":"False"
	I0610 11:13:24.471303   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:24.471303   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:24.471303   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:24.471303   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:24.476399   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:24.960362   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:24.960552   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:24.960552   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:24.960552   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:24.965584   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:25.469837   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:25.469907   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:25.469907   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:25.469907   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:25.474194   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:25.971869   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:25.971934   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:25.971934   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:25.971934   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:25.978438   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:26.472556   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:26.472618   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:26.472618   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:26.472618   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:26.477581   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:26.478195   12440 node_ready.go:53] node "ha-368100-m03" has status "Ready":"False"
	I0610 11:13:26.959578   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:26.959578   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:26.959578   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:26.959731   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:26.965147   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:27.461681   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:27.461927   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:27.461927   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:27.461927   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:27.469793   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:13:27.962882   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:27.963018   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:27.963083   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:27.963083   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:27.979430   12440 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0610 11:13:28.461696   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:28.461771   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:28.461826   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:28.461826   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:28.467657   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:28.961855   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:28.961855   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:28.961855   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:28.961855   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:28.966439   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:28.967601   12440 node_ready.go:53] node "ha-368100-m03" has status "Ready":"False"
	I0610 11:13:29.460401   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:29.460401   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:29.460401   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:29.460401   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:29.466165   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:29.962951   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:29.962951   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:29.963101   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:29.963101   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:29.967519   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:30.465128   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:30.465358   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.465358   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.465358   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.476127   12440 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 11:13:30.477470   12440 node_ready.go:49] node "ha-368100-m03" has status "Ready":"True"
	I0610 11:13:30.477522   12440 node_ready.go:38] duration metric: took 10.5186832s for node "ha-368100-m03" to be "Ready" ...
	I0610 11:13:30.477522   12440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:13:30.477640   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:13:30.477696   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.477696   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.477696   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.488917   12440 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 11:13:30.500005   12440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.500099   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2jsrh
	I0610 11:13:30.500099   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.500099   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.500099   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.503829   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:13:30.505213   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:30.505213   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.505394   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.505394   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.511536   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:30.512107   12440 pod_ready.go:92] pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:30.512138   12440 pod_ready.go:81] duration metric: took 12.0389ms for pod "coredns-7db6d8ff4d-2jsrh" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.512205   12440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.512300   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dl8r2
	I0610 11:13:30.512338   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.512338   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.512338   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.516425   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:30.518048   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:30.518145   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.518145   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.518145   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.521458   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:13:30.522466   12440 pod_ready.go:92] pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:30.522466   12440 pod_ready.go:81] duration metric: took 10.2607ms for pod "coredns-7db6d8ff4d-dl8r2" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.522466   12440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.522466   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100
	I0610 11:13:30.522466   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.522466   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.522466   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.528692   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:30.529229   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:30.529229   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.529229   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.529229   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.532904   12440 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:13:30.533976   12440 pod_ready.go:92] pod "etcd-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:30.533976   12440 pod_ready.go:81] duration metric: took 11.5096ms for pod "etcd-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.533976   12440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.534565   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m02
	I0610 11:13:30.534565   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.534638   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.534638   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.541358   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:30.544187   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:30.544255   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.544255   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.544328   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.556527   12440 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0610 11:13:30.557664   12440 pod_ready.go:92] pod "etcd-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:30.557763   12440 pod_ready.go:81] duration metric: took 23.7865ms for pod "etcd-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.557763   12440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:30.668086   12440 request.go:629] Waited for 110.0843ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m03
	I0610 11:13:30.668156   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m03
	I0610 11:13:30.668156   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.668156   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.668337   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.672712   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:30.873594   12440 request.go:629] Waited for 199.688ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:30.873594   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:30.873594   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:30.873594   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:30.873594   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:30.887647   12440 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0610 11:13:31.078322   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m03
	I0610 11:13:31.078476   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:31.078476   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:31.078476   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:31.083295   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:31.267720   12440 request.go:629] Waited for 183.1273ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:31.267720   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:31.267720   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:31.267720   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:31.267720   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:31.275303   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:13:31.564591   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m03
	I0610 11:13:31.564759   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:31.564759   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:31.564759   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:31.572396   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:13:31.675017   12440 request.go:629] Waited for 101.3926ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:31.675122   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:31.675122   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:31.675122   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:31.675122   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:31.680026   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:32.068886   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/etcd-ha-368100-m03
	I0610 11:13:32.068886   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.068998   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.068998   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.075539   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:32.076718   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:32.076718   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.076821   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.076821   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.081000   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:32.081000   12440 pod_ready.go:92] pod "etcd-ha-368100-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:32.081000   12440 pod_ready.go:81] duration metric: took 1.5232246s for pod "etcd-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:32.081000   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:32.273706   12440 request.go:629] Waited for 192.4855ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100
	I0610 11:13:32.273811   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100
	I0610 11:13:32.273860   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.273860   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.273860   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.279553   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:32.479921   12440 request.go:629] Waited for 198.7924ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:32.479964   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:32.480147   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.480147   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.480400   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.484735   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:32.485867   12440 pod_ready.go:92] pod "kube-apiserver-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:32.485867   12440 pod_ready.go:81] duration metric: took 404.8635ms for pod "kube-apiserver-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:32.485867   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:32.672602   12440 request.go:629] Waited for 186.7336ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:13:32.672835   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m02
	I0610 11:13:32.672835   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.672835   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.672835   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.677676   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:32.874631   12440 request.go:629] Waited for 194.9759ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:32.874859   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:32.874914   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:32.874914   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:32.874914   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:32.880303   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:32.881409   12440 pod_ready.go:92] pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:32.881485   12440 pod_ready.go:81] duration metric: took 395.6152ms for pod "kube-apiserver-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:32.881485   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:33.076288   12440 request.go:629] Waited for 194.7192ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m03
	I0610 11:13:33.076288   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-368100-m03
	I0610 11:13:33.076288   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:33.076288   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:33.076288   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:33.081498   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:33.267076   12440 request.go:629] Waited for 185.073ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:33.267514   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:33.267514   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:33.267514   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:33.267618   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:33.276259   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:13:33.277539   12440 pod_ready.go:92] pod "kube-apiserver-ha-368100-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:33.277712   12440 pod_ready.go:81] duration metric: took 396.2238ms for pod "kube-apiserver-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:33.277712   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:33.471622   12440 request.go:629] Waited for 193.6777ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100
	I0610 11:13:33.471807   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100
	I0610 11:13:33.471807   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:33.471807   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:33.471807   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:33.482913   12440 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 11:13:33.675070   12440 request.go:629] Waited for 191.344ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:33.675345   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:33.675345   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:33.675345   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:33.675345   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:33.679881   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:33.681539   12440 pod_ready.go:92] pod "kube-controller-manager-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:33.681539   12440 pod_ready.go:81] duration metric: took 403.7554ms for pod "kube-controller-manager-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:33.681593   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:33.880488   12440 request.go:629] Waited for 198.5055ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m02
	I0610 11:13:33.880723   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m02
	I0610 11:13:33.880723   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:33.880723   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:33.880839   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:33.885125   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:34.066585   12440 request.go:629] Waited for 179.5935ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:34.066927   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:34.066927   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:34.066927   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:34.066927   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:34.073083   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:34.074487   12440 pod_ready.go:92] pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:34.074545   12440 pod_ready.go:81] duration metric: took 392.9494ms for pod "kube-controller-manager-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:34.074545   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:34.272954   12440 request.go:629] Waited for 197.9117ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m03
	I0610 11:13:34.272954   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-368100-m03
	I0610 11:13:34.272954   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:34.272954   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:34.272954   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:34.279480   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:34.476789   12440 request.go:629] Waited for 195.9076ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:34.476983   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:34.476983   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:34.476983   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:34.476983   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:34.486699   12440 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 11:13:34.487715   12440 pod_ready.go:92] pod "kube-controller-manager-ha-368100-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:34.487715   12440 pod_ready.go:81] duration metric: took 413.1666ms for pod "kube-controller-manager-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:34.487715   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2j65l" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:34.679156   12440 request.go:629] Waited for 191.2526ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2j65l
	I0610 11:13:34.679304   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2j65l
	I0610 11:13:34.679304   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:34.679304   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:34.679304   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:34.686710   12440 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:13:34.865644   12440 request.go:629] Waited for 176.4181ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:34.865859   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:34.865982   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:34.865982   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:34.866066   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:34.871448   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:34.872913   12440 pod_ready.go:92] pod "kube-proxy-2j65l" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:34.872913   12440 pod_ready.go:81] duration metric: took 385.1945ms for pod "kube-proxy-2j65l" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:34.873122   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2mwxs" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:35.070541   12440 request.go:629] Waited for 197.358ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mwxs
	I0610 11:13:35.070996   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2mwxs
	I0610 11:13:35.071109   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:35.071109   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:35.071109   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:35.077434   12440 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:13:35.275992   12440 request.go:629] Waited for 197.5952ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:35.276310   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:35.276310   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:35.276310   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:35.276310   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:35.280944   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:35.282512   12440 pod_ready.go:92] pod "kube-proxy-2mwxs" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:35.282657   12440 pod_ready.go:81] duration metric: took 409.4723ms for pod "kube-proxy-2mwxs" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:35.282728   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pvvwh" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:35.478410   12440 request.go:629] Waited for 195.4767ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvvwh
	I0610 11:13:35.478725   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvvwh
	I0610 11:13:35.478764   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:35.478798   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:35.478798   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:35.482834   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:35.666079   12440 request.go:629] Waited for 181.6553ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:35.666335   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:35.666335   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:35.666442   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:35.666442   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:35.678282   12440 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 11:13:35.679335   12440 pod_ready.go:92] pod "kube-proxy-pvvwh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:35.679534   12440 pod_ready.go:81] duration metric: took 396.8033ms for pod "kube-proxy-pvvwh" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:35.679573   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:35.868218   12440 request.go:629] Waited for 188.5657ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100
	I0610 11:13:35.868476   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100
	I0610 11:13:35.868593   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:35.868593   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:35.868593   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:35.877933   12440 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 11:13:36.069058   12440 request.go:629] Waited for 190.1224ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:36.069483   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100
	I0610 11:13:36.069483   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.069483   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.069483   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.074736   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:36.075764   12440 pod_ready.go:92] pod "kube-scheduler-ha-368100" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:36.075764   12440 pod_ready.go:81] duration metric: took 396.1877ms for pod "kube-scheduler-ha-368100" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:36.075764   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:36.273650   12440 request.go:629] Waited for 197.5249ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m02
	I0610 11:13:36.273861   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m02
	I0610 11:13:36.273861   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.273861   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.273861   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.282565   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:13:36.474212   12440 request.go:629] Waited for 190.3924ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:36.474212   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m02
	I0610 11:13:36.474212   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.474212   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.474212   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.480110   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:36.480798   12440 pod_ready.go:92] pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:36.481348   12440 pod_ready.go:81] duration metric: took 405.5808ms for pod "kube-scheduler-ha-368100-m02" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:36.481564   12440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:36.675558   12440 request.go:629] Waited for 193.5616ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m03
	I0610 11:13:36.675737   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-368100-m03
	I0610 11:13:36.675737   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.675737   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.675821   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.680527   12440 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:13:36.879561   12440 request.go:629] Waited for 197.1321ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:36.879795   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes/ha-368100-m03
	I0610 11:13:36.879795   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.879795   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.879795   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.888238   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:13:36.889115   12440 pod_ready.go:92] pod "kube-scheduler-ha-368100-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 11:13:36.889282   12440 pod_ready.go:81] duration metric: took 407.6011ms for pod "kube-scheduler-ha-368100-m03" in "kube-system" namespace to be "Ready" ...
	I0610 11:13:36.889373   12440 pod_ready.go:38] duration metric: took 6.4117288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:13:36.889439   12440 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:13:36.902667   12440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:13:36.932589   12440 api_server.go:72] duration metric: took 17.5264988s to wait for apiserver process to appear ...
	I0610 11:13:36.932589   12440 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:13:36.932589   12440 api_server.go:253] Checking apiserver healthz at https://172.17.146.64:8443/healthz ...
	I0610 11:13:36.940827   12440 api_server.go:279] https://172.17.146.64:8443/healthz returned 200:
	ok
	I0610 11:13:36.941878   12440 round_trippers.go:463] GET https://172.17.146.64:8443/version
	I0610 11:13:36.941958   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:36.941958   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:36.941958   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:36.944356   12440 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:13:36.944873   12440 api_server.go:141] control plane version: v1.30.1
	I0610 11:13:36.944873   12440 api_server.go:131] duration metric: took 12.2846ms to wait for apiserver health ...
	I0610 11:13:36.944873   12440 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:13:37.067966   12440 request.go:629] Waited for 122.7294ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:13:37.068085   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:13:37.068085   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:37.068085   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:37.068085   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:37.082170   12440 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0610 11:13:37.093121   12440 system_pods.go:59] 24 kube-system pods found
	I0610 11:13:37.093121   12440 system_pods.go:61] "coredns-7db6d8ff4d-2jsrh" [eec90043-8c22-4041-a178-266148b8368e] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "coredns-7db6d8ff4d-dl8r2" [39350017-f3e1-44ea-a786-c03ee7a0fd8e] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "etcd-ha-368100" [a8a99351-89b1-4e87-a251-e8735df617cc] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "etcd-ha-368100-m02" [fa26841e-b79d-483a-b723-3654fde31626] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "etcd-ha-368100-m03" [e26b99db-b727-47e4-9aa8-7cd2f1a58454] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kindnet-g66bp" [aeebb510-5026-4062-95d8-be966524f934] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kindnet-n6fxd" [327dd296-b02d-4784-a971-80cee701dee0] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kindnet-qk4fv" [3687f8c4-d986-4023-a2ad-98aa6d4ddd15] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-apiserver-ha-368100" [60620b18-7050-463c-b761-9d89caea2869] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-apiserver-ha-368100-m02" [b0105503-1e6b-4d83-a2ff-c921f7916ceb] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-apiserver-ha-368100-m03" [4d3f6596-2d88-46bc-8ca1-6115e3f60dca] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-controller-manager-ha-368100" [a1e4d3d6-ff46-4f52-b5ff-fdad20389b34] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-controller-manager-ha-368100-m02" [18ffec1a-6bb3-4236-98f4-88e03d83516b] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-controller-manager-ha-368100-m03" [32925a2e-757b-4bbc-8d2d-258212289ae0] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-proxy-2j65l" [dfd9f031-9a9e-46fc-ad2f-b0d61e7d7034] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-proxy-2mwxs" [4ba43598-8c67-43cc-b17a-7d7fbd835edc] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-proxy-pvvwh" [6cc7a9ab-5235-4c3a-8184-be5b4e436320] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-scheduler-ha-368100" [ac6c4d94-e6c2-4e43-b8ea-7819597ff572] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-scheduler-ha-368100-m02" [3d706715-7b39-4a07-ad0d-2e91b0476ac7] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-scheduler-ha-368100-m03" [d0c84f6a-aae9-4c03-9d69-5b2643e0dfc1] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-vip-ha-368100" [fbd9ab1c-c5b6-4b14-b4a7-8da5a58285b4] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-vip-ha-368100-m02" [2e64f8be-5d5f-41ba-b4c8-9f3623e9efc6] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "kube-vip-ha-368100-m03" [0482cc17-ebee-4f8f-a02d-5e39d035f7b4] Running
	I0610 11:13:37.093121   12440 system_pods.go:61] "storage-provisioner" [853aab4d-2671-43fd-a221-0966d875b568] Running
	I0610 11:13:37.093121   12440 system_pods.go:74] duration metric: took 148.2463ms to wait for pod list to return data ...
	I0610 11:13:37.093121   12440 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:13:37.271805   12440 request.go:629] Waited for 178.6825ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/default/serviceaccounts
	I0610 11:13:37.271805   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/default/serviceaccounts
	I0610 11:13:37.271805   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:37.271805   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:37.271805   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:37.276890   12440 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:13:37.277983   12440 default_sa.go:45] found service account: "default"
	I0610 11:13:37.278035   12440 default_sa.go:55] duration metric: took 184.8609ms for default service account to be created ...
	I0610 11:13:37.278035   12440 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:13:37.475811   12440 request.go:629] Waited for 197.534ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:13:37.476060   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/namespaces/kube-system/pods
	I0610 11:13:37.476060   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:37.476157   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:37.476157   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:37.492717   12440 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0610 11:13:37.504375   12440 system_pods.go:86] 24 kube-system pods found
	I0610 11:13:37.504375   12440 system_pods.go:89] "coredns-7db6d8ff4d-2jsrh" [eec90043-8c22-4041-a178-266148b8368e] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "coredns-7db6d8ff4d-dl8r2" [39350017-f3e1-44ea-a786-c03ee7a0fd8e] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "etcd-ha-368100" [a8a99351-89b1-4e87-a251-e8735df617cc] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "etcd-ha-368100-m02" [fa26841e-b79d-483a-b723-3654fde31626] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "etcd-ha-368100-m03" [e26b99db-b727-47e4-9aa8-7cd2f1a58454] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "kindnet-g66bp" [aeebb510-5026-4062-95d8-be966524f934] Running
	I0610 11:13:37.504375   12440 system_pods.go:89] "kindnet-n6fxd" [327dd296-b02d-4784-a971-80cee701dee0] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kindnet-qk4fv" [3687f8c4-d986-4023-a2ad-98aa6d4ddd15] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-apiserver-ha-368100" [60620b18-7050-463c-b761-9d89caea2869] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-apiserver-ha-368100-m02" [b0105503-1e6b-4d83-a2ff-c921f7916ceb] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-apiserver-ha-368100-m03" [4d3f6596-2d88-46bc-8ca1-6115e3f60dca] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-controller-manager-ha-368100" [a1e4d3d6-ff46-4f52-b5ff-fdad20389b34] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-controller-manager-ha-368100-m02" [18ffec1a-6bb3-4236-98f4-88e03d83516b] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-controller-manager-ha-368100-m03" [32925a2e-757b-4bbc-8d2d-258212289ae0] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-proxy-2j65l" [dfd9f031-9a9e-46fc-ad2f-b0d61e7d7034] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-proxy-2mwxs" [4ba43598-8c67-43cc-b17a-7d7fbd835edc] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-proxy-pvvwh" [6cc7a9ab-5235-4c3a-8184-be5b4e436320] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-scheduler-ha-368100" [ac6c4d94-e6c2-4e43-b8ea-7819597ff572] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-scheduler-ha-368100-m02" [3d706715-7b39-4a07-ad0d-2e91b0476ac7] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-scheduler-ha-368100-m03" [d0c84f6a-aae9-4c03-9d69-5b2643e0dfc1] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-vip-ha-368100" [fbd9ab1c-c5b6-4b14-b4a7-8da5a58285b4] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-vip-ha-368100-m02" [2e64f8be-5d5f-41ba-b4c8-9f3623e9efc6] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "kube-vip-ha-368100-m03" [0482cc17-ebee-4f8f-a02d-5e39d035f7b4] Running
	I0610 11:13:37.504568   12440 system_pods.go:89] "storage-provisioner" [853aab4d-2671-43fd-a221-0966d875b568] Running
	I0610 11:13:37.504568   12440 system_pods.go:126] duration metric: took 226.5318ms to wait for k8s-apps to be running ...
	I0610 11:13:37.504568   12440 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:13:37.518316   12440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:13:37.546906   12440 system_svc.go:56] duration metric: took 42.3372ms WaitForService to wait for kubelet
	I0610 11:13:37.546906   12440 kubeadm.go:576] duration metric: took 18.1410394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:13:37.546906   12440 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:13:37.680502   12440 request.go:629] Waited for 133.595ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.146.64:8443/api/v1/nodes
	I0610 11:13:37.680687   12440 round_trippers.go:463] GET https://172.17.146.64:8443/api/v1/nodes
	I0610 11:13:37.680687   12440 round_trippers.go:469] Request Headers:
	I0610 11:13:37.680687   12440 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:13:37.680687   12440 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 11:13:37.689593   12440 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:13:37.691635   12440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:13:37.691635   12440 node_conditions.go:123] node cpu capacity is 2
	I0610 11:13:37.691635   12440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:13:37.691635   12440 node_conditions.go:123] node cpu capacity is 2
	I0610 11:13:37.691635   12440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:13:37.691635   12440 node_conditions.go:123] node cpu capacity is 2
	I0610 11:13:37.691635   12440 node_conditions.go:105] duration metric: took 144.7282ms to run NodePressure ...
	I0610 11:13:37.691635   12440 start.go:240] waiting for startup goroutines ...
	I0610 11:13:37.691635   12440 start.go:254] writing updated cluster config ...
	I0610 11:13:37.704674   12440 ssh_runner.go:195] Run: rm -f paused
	I0610 11:13:37.865188   12440 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:13:37.868985   12440 out.go:177] * Done! kubectl is now configured to use "ha-368100" cluster and "default" namespace by default
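
The block ending here is minikube's cluster-bringup wait loop: node_ready polls GET /api/v1/nodes/<name> on a roughly 500ms cadence until the node's Ready condition turns True, pod_ready then walks each system-critical pod (and the node it runs on) the same way, and the request.go "Waited ... due to client-side throttling" lines come from client-go's client-side rate limiter (by default roughly QPS 5, burst 10) pacing those bursts of GETs; as the message itself notes, this is not apiserver priority-and-fairness. Below is a minimal client-go sketch of the same polling pattern. It is illustrative only, not minikube's actual node_ready.go; the node name, 6-minute budget, and 500ms interval are simply taken from the log above.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady mirrors the check behind the node_ready.go:53 lines above:
	// a node counts as Ready only when its NodeReady condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load ~/.kube/config; minikube builds its client the same general way.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Leaving cfg.QPS/cfg.Burst at their defaults is what produces the
		// "Waited ... due to client-side throttling" lines in the log.
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			if ctx.Err() != nil {
				panic(ctx.Err()) // gave up after the 6m0s budget, as the test would
			}
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-368100-m03", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // the ~500ms cadence visible above
		}
	}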
	
	
	==> Docker <==
	Jun 10 11:05:38 ha-368100 cri-dockerd[1227]: time="2024-06-10T11:05:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/792a1f88c34ef3d0443b9041ca9af3b415a7afe07c8bb4b0d44692ef213163f8/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 11:05:39 ha-368100 cri-dockerd[1227]: time="2024-06-10T11:05:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7e1f56d0d8fcd8b456122b36831f2495c9e29317bbb6cc9b665c88d54331aa7/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 11:05:39 ha-368100 cri-dockerd[1227]: time="2024-06-10T11:05:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5401e0e3d499b0543b4e30bc86ebfa14378c65915eb9df177e04f8d5355633fd/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.288825207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.288939812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.288960413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.289398530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.394610130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.394703134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.394738235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.394849139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.463534816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.463789826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.464000134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:05:39 ha-368100 dockerd[1332]: time="2024-06-10T11:05:39.464238744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:14:19 ha-368100 dockerd[1332]: time="2024-06-10T11:14:19.292214584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:14:19 ha-368100 dockerd[1332]: time="2024-06-10T11:14:19.292457987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:14:19 ha-368100 dockerd[1332]: time="2024-06-10T11:14:19.292546388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:14:19 ha-368100 dockerd[1332]: time="2024-06-10T11:14:19.292850491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:14:19 ha-368100 cri-dockerd[1227]: time="2024-06-10T11:14:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/933e7b7f774c62b84bd1c6980099a49ce8b12d42f25be8182a33603cb751e0a6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 11:14:20 ha-368100 cri-dockerd[1227]: time="2024-06-10T11:14:20Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 10 11:14:21 ha-368100 dockerd[1332]: time="2024-06-10T11:14:21.121737675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:14:21 ha-368100 dockerd[1332]: time="2024-06-10T11:14:21.121964978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:14:21 ha-368100 dockerd[1332]: time="2024-06-10T11:14:21.122021878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:14:21 ha-368100 dockerd[1332]: time="2024-06-10T11:14:21.122180780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
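
The cri-dockerd "Will attempt to re-write config file ... resolv.conf" lines above record the two DNS configurations minikube hands out: the system pod sandboxes created at 11:05 get the Hyper-V host gateway (nameserver 172.17.144.1), while the busybox test pod sandbox created at 11:14 gets cluster DNS (nameserver 10.96.0.10 plus the cluster.local search path and ndots:5). To verify what a container actually ended up with, you can read its resolv.conf through the Docker runtime on the node. A small sketch, assuming Docker CLI access on the node (e.g. via minikube -p ha-368100 ssh), that the busybox container from the status table below is still running, and that its truncated ID df85a8c280b4e is a unique prefix:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// df85a8c280b4e is the busybox container from the "container status"
		// table; Docker resolves unique ID prefixes, and busybox ships cat.
		out, err := exec.Command("docker", "exec", "df85a8c280b4e",
			"cat", "/etc/resolv.conf").CombinedOutput()
		if err != nil {
			panic(err)
		}
		// Expect: nameserver 10.96.0.10, the cluster.local search path,
		// and options ndots:5, matching the cri-dockerd rewrite above.
		fmt.Print(string(out))
	}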
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	df85a8c280b4e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   933e7b7f774c6       busybox-fc5497c4f-kff2v
	09cd6b70fc20f       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   5401e0e3d499b       coredns-7db6d8ff4d-dl8r2
	223bd98c3c165       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   d7e1f56d0d8fc       storage-provisioner
	efb3b4096e35d       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   792a1f88c34ef       coredns-7db6d8ff4d-2jsrh
	73444aa5980bc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              26 minutes ago      Running             kindnet-cni               0                   ce4274ec4f374       kindnet-qk4fv
	115b8330d5339       747097150317f                                                                                         26 minutes ago      Running             kube-proxy                0                   9832445ddcc98       kube-proxy-2j65l
	56f42c342b96a       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   1916a970b5e71       kube-vip-ha-368100
	b540b6d71db60       a52dc94f0a912                                                                                         27 minutes ago      Running             kube-scheduler            0                   7be8b7f9270b0       kube-scheduler-ha-368100
	d777e3ce95a04       25a1387cdab82                                                                                         27 minutes ago      Running             kube-controller-manager   0                   5fd6688a8e7bb       kube-controller-manager-ha-368100
	fb70745682bca       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   b644d46a1aae9       etcd-ha-368100
	f08944a38cbb0       91be940803172                                                                                         27 minutes ago      Running             kube-apiserver            0                   5839fc372f844       kube-apiserver-ha-368100
	
	
	==> coredns [09cd6b70fc20] <==
	[INFO] 10.244.1.2:59848 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037401267s
	[INFO] 10.244.1.2:47482 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117601s
	[INFO] 10.244.1.2:44195 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214802s
	[INFO] 10.244.0.4:53862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112701s
	[INFO] 10.244.0.4:50783 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076801s
	[INFO] 10.244.0.4:51910 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232602s
	[INFO] 10.244.0.4:37023 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000246702s
	[INFO] 10.244.0.4:47932 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172502s
	[INFO] 10.244.2.2:34531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248003s
	[INFO] 10.244.2.2:33872 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000072201s
	[INFO] 10.244.2.2:59280 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000474005s
	[INFO] 10.244.2.2:59958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065401s
	[INFO] 10.244.0.4:51073 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114501s
	[INFO] 10.244.2.2:49831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000282202s
	[INFO] 10.244.2.2:54890 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080101s
	[INFO] 10.244.2.2:60475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060901s
	[INFO] 10.244.2.2:55509 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062601s
	[INFO] 10.244.1.2:47076 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119601s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000172402s
	[INFO] 10.244.1.2:50519 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000300503s
	[INFO] 10.244.0.4:46515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178602s
	[INFO] 10.244.0.4:47844 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000323503s
	[INFO] 10.244.2.2:36577 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000322703s
	[INFO] 10.244.2.2:39282 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137601s
	[INFO] 10.244.2.2:56688 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000217803s
	
	
	==> coredns [efb3b4096e35] <==
	[INFO] 10.244.0.4:43477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000285403s
	[INFO] 10.244.0.4:39133 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.040194394s
	[INFO] 10.244.2.2:38597 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148801s
	[INFO] 10.244.2.2:32822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000123601s
	[INFO] 10.244.2.2:49451 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000064601s
	[INFO] 10.244.1.2:45373 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.001235812s
	[INFO] 10.244.1.2:34919 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137001s
	[INFO] 10.244.0.4:39606 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012679324s
	[INFO] 10.244.0.4:44144 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138702s
	[INFO] 10.244.0.4:48550 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122201s
	[INFO] 10.244.2.2:35261 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113401s
	[INFO] 10.244.2.2:57747 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024111336s
	[INFO] 10.244.2.2:53428 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139601s
	[INFO] 10.244.2.2:47173 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000302703s
	[INFO] 10.244.1.2:38112 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195702s
	[INFO] 10.244.1.2:43394 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104001s
	[INFO] 10.244.1.2:33777 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114402s
	[INFO] 10.244.1.2:41805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088601s
	[INFO] 10.244.0.4:45442 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251503s
	[INFO] 10.244.0.4:40494 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074601s
	[INFO] 10.244.0.4:49300 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059601s
	[INFO] 10.244.1.2:48668 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122402s
	[INFO] 10.244.0.4:59785 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000239702s
	[INFO] 10.244.0.4:46111 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084601s
	[INFO] 10.244.2.2:60671 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177302s
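
A note on reading these entries: the NXDOMAIN, NXDOMAIN, NOERROR runs for "kubernetes.default" are the pod resolver walking its resolv.conf search path — the bare name and "kubernetes.default.default.svc.cluster.local." both fail before "kubernetes.default.svc.cluster.local." resolves. Each entry follows the coredns log plugin's default layout: client, query id, quoted question (type, class, name, protocol, size, DO bit, UDP buffer size), then rcode, header flags, response size, and duration. A minimal Go sketch (not part of the test suite) that splits one of the lines above into those fields, assuming that default format:

	// Parse one coredns "log" plugin line into its fields.
	package main

	import (
		"fmt"
		"regexp"
	)

	var re = regexp.MustCompile(`^\[INFO\] (\S+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)$`)

	func main() {
		line := `[INFO] 10.244.1.2:44195 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214802s`
		m := re.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("unrecognized line")
			return
		}
		// m[1]=client, m[3]=qtype, m[5]=name, m[10]=rcode, m[13]=duration
		fmt.Printf("client=%s qtype=%s name=%s rcode=%s duration=%s\n", m[1], m[3], m[5], m[10], m[13])
	}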
	
	
	==> describe nodes <==
	Name:               ha-368100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-368100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-368100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T11_05_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:05:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-368100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:32:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:30:00 +0000   Mon, 10 Jun 2024 11:05:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:30:00 +0000   Mon, 10 Jun 2024 11:05:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:30:00 +0000   Mon, 10 Jun 2024 11:05:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:30:00 +0000   Mon, 10 Jun 2024 11:05:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.146.64
	  Hostname:    ha-368100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 106727802fb741c6bff8a0ac9485fce0
	  System UUID:                72c8b920-e217-884f-be80-9e941a2f6edb
	  Boot ID:                    86d99b64-160f-4792-ac83-4a9e72e98c28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kff2v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-2jsrh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-dl8r2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-368100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-qk4fv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-368100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-368100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-2j65l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-368100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-368100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-368100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-368100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-368100 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-368100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-368100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-368100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                node-controller  Node ha-368100 event: Registered Node ha-368100 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-368100 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-368100 event: Registered Node ha-368100 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-368100 event: Registered Node ha-368100 in Controller
	
	
	Name:               ha-368100-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-368100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-368100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T11_09_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:09:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-368100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:32:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:30:04 +0000   Mon, 10 Jun 2024 11:09:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:30:04 +0000   Mon, 10 Jun 2024 11:09:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:30:04 +0000   Mon, 10 Jun 2024 11:09:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:30:04 +0000   Mon, 10 Jun 2024 11:09:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.157.100
	  Hostname:    ha-368100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcd1e300f599473fa217cdf3004cc672
	  System UUID:                0564af0b-b479-c54b-840a-d86e879c7ca4
	  Boot ID:                    44f16c22-bd3d-4bbd-9872-695e8d9773fb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9tfq9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-368100-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-g66bp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-368100-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-368100-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-2mwxs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-368100-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-368100-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node ha-368100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node ha-368100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node ha-368100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-368100-m02 event: Registered Node ha-368100-m02 in Controller
	  Normal  NodeReady                22m                kubelet          Node ha-368100-m02 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-368100-m02 event: Registered Node ha-368100-m02 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-368100-m02 event: Registered Node ha-368100-m02 in Controller
	
	
	Name:               ha-368100-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-368100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-368100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T11_13_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:13:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-368100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:32:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:30:01 +0000   Mon, 10 Jun 2024 11:13:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:30:01 +0000   Mon, 10 Jun 2024 11:13:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:30:01 +0000   Mon, 10 Jun 2024 11:13:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:30:01 +0000   Mon, 10 Jun 2024 11:13:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.144.162
	  Hostname:    ha-368100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f366647d884480da181af8caa28b5d5
	  System UUID:                c1ffacdd-d11a-8444-b3ec-cc3e820687e2
	  Boot ID:                    86b69237-2b2c-449c-8f74-93780012c7ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s49nb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-368100-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-n6fxd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-368100-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-368100-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-pvvwh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-368100-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-368100-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-368100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-368100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-368100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-368100-m03 event: Registered Node ha-368100-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-368100-m03 event: Registered Node ha-368100-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-368100-m03 event: Registered Node ha-368100-m03 in Controller
	
	
	Name:               ha-368100-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-368100-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-368100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T11_18_55_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:18:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-368100-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:32:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:29:37 +0000   Mon, 10 Jun 2024 11:18:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:29:37 +0000   Mon, 10 Jun 2024 11:18:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:29:37 +0000   Mon, 10 Jun 2024 11:18:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:29:37 +0000   Mon, 10 Jun 2024 11:19:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.151.201
	  Hostname:    ha-368100-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e64afee3809d433ebf03e047207b73c9
	  System UUID:                253bb9f9-c0da-de45-978f-7fff9ff076a1
	  Boot ID:                    11b58370-44bb-4d15-9bfe-18c032dcc850
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-clffm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-bkhhw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-368100-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-368100-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-368100-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-368100-m04 event: Registered Node ha-368100-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-368100-m04 event: Registered Node ha-368100-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-368100-m04 event: Registered Node ha-368100-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-368100-m04 status is now: NodeReady
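
The percentages throughout these node summaries are integer fractions of each node's Allocatable figures (2 CPU = 2000m, 2164264Ki memory). A minimal Go sketch reproducing the ha-368100 request numbers, assuming kubectl-style truncating integer division:

	// Reproduce the Requests percentages from the ha-368100 summary above.
	package main

	import "fmt"

	func main() {
		const allocCPUMilli = 2000 // Allocatable: cpu 2
		const allocMemKi = 2164264 // Allocatable: memory 2164264Ki

		cpuReqMilli := 950     // summed CPU requests: 950m
		memReqKi := 290 * 1024 // summed memory requests: 290Mi in Ki

		fmt.Printf("cpu    %dm (%d%%)\n", cpuReqMilli, cpuReqMilli*100/allocCPUMilli) // 47%
		fmt.Printf("memory %dKi (%d%%)\n", memReqKi, memReqKi*100/allocMemKi)         // 13%
	}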
	
	
	==> dmesg <==
	[  +0.000175] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun10 11:04] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.191196] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +32.178537] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.113388] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.570574] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.202902] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.240161] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +2.856344] systemd-fstab-generator[1180]: Ignoring "noauto" option for root device
	[  +0.210854] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.231849] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.306923] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[ +11.826773] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.119542] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.693035] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[Jun10 11:05] systemd-fstab-generator[1726]: Ignoring "noauto" option for root device
	[  +0.092283] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.711689] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.333377] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[ +17.576528] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.861142] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.227167] kauditd_printk_skb: 14 callbacks suppressed
	[Jun10 11:09] kauditd_printk_skb: 9 callbacks suppressed
	[Jun10 11:12] hrtimer: interrupt took 17869246 ns
	
	
	==> etcd [fb70745682bc] <==
	{"level":"info","ts":"2024-06-10T11:19:06.619572Z","caller":"traceutil/trace.go:171","msg":"trace[131245544] transaction","detail":"{read_only:false; response_revision:2735; number_of_response:1; }","duration":"105.279084ms","start":"2024-06-10T11:19:06.514275Z","end":"2024-06-10T11:19:06.619554Z","steps":["trace[131245544] 'process raft request'  (duration: 104.94048ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:19:06.627026Z","caller":"traceutil/trace.go:171","msg":"trace[1822698725] transaction","detail":"{read_only:false; response_revision:2736; number_of_response:1; }","duration":"112.737653ms","start":"2024-06-10T11:19:06.514274Z","end":"2024-06-10T11:19:06.627011Z","steps":["trace[1822698725] 'process raft request'  (duration: 112.677953ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T11:19:12.013973Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.097533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-06-10T11:19:12.01409Z","caller":"traceutil/trace.go:171","msg":"trace[199376370] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2754; }","duration":"282.250934ms","start":"2024-06-10T11:19:11.731824Z","end":"2024-06-10T11:19:12.014075Z","steps":["trace[199376370] 'range keys from in-memory index tree'  (duration: 280.093014ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T11:19:12.273402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.984804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-368100-m04\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-06-10T11:19:12.273503Z","caller":"traceutil/trace.go:171","msg":"trace[1646480321] range","detail":"{range_begin:/registry/minions/ha-368100-m04; range_end:; response_count:1; response_revision:2755; }","duration":"129.221305ms","start":"2024-06-10T11:19:12.144256Z","end":"2024-06-10T11:19:12.273478Z","steps":["trace[1646480321] 'range keys from in-memory index tree'  (duration: 127.308287ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:19:13.628835Z","caller":"traceutil/trace.go:171","msg":"trace[1682062009] transaction","detail":"{read_only:false; response_revision:2781; number_of_response:1; }","duration":"101.381746ms","start":"2024-06-10T11:19:13.527438Z","end":"2024-06-10T11:19:13.62882Z","steps":["trace[1682062009] 'process raft request'  (duration: 101.060243ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:19:13.629399Z","caller":"traceutil/trace.go:171","msg":"trace[1460686874] linearizableReadLoop","detail":"{readStateIndex:3319; appliedIndex:3330; }","duration":"107.370702ms","start":"2024-06-10T11:19:13.522019Z","end":"2024-06-10T11:19:13.62939Z","steps":["trace[1460686874] 'read index received'  (duration: 107.366302ms)","trace[1460686874] 'applied index is now lower than readState.Index'  (duration: 3.8µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T11:19:13.629973Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.355431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-mzv6w\" ","response":"range_response_count:1 size:4040"}
	{"level":"info","ts":"2024-06-10T11:19:13.63007Z","caller":"traceutil/trace.go:171","msg":"trace[1432068798] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-mzv6w; range_end:; response_count:1; response_revision:2785; }","duration":"153.474232ms","start":"2024-06-10T11:19:13.476586Z","end":"2024-06-10T11:19:13.630061Z","steps":["trace[1432068798] 'agreement among raft nodes before linearized reading'  (duration: 152.949727ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:19:13.63042Z","caller":"traceutil/trace.go:171","msg":"trace[1096788951] transaction","detail":"{read_only:false; response_revision:2780; number_of_response:1; }","duration":"106.344492ms","start":"2024-06-10T11:19:13.52406Z","end":"2024-06-10T11:19:13.630404Z","steps":["trace[1096788951] 'process raft request'  (duration: 104.198072ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T11:19:13.630994Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.950303ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T11:19:13.631046Z","caller":"traceutil/trace.go:171","msg":"trace[1584243065] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2785; }","duration":"129.028504ms","start":"2024-06-10T11:19:13.502008Z","end":"2024-06-10T11:19:13.631036Z","steps":["trace[1584243065] 'agreement among raft nodes before linearized reading'  (duration: 128.915903ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T11:19:13.631675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.598342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-clffm\" ","response":"range_response_count:1 size:4040"}
	{"level":"info","ts":"2024-06-10T11:19:13.631726Z","caller":"traceutil/trace.go:171","msg":"trace[795966588] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-clffm; range_end:; response_count:1; response_revision:2785; }","duration":"154.652143ms","start":"2024-06-10T11:19:13.477066Z","end":"2024-06-10T11:19:13.631718Z","steps":["trace[795966588] 'agreement among raft nodes before linearized reading'  (duration: 154.531841ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:20:03.796012Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1975}
	{"level":"info","ts":"2024-06-10T11:20:03.862314Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1975,"took":"64.015594ms","hash":2541047471,"current-db-size-bytes":3715072,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2813952,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-06-10T11:20:03.862569Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2541047471,"revision":1975,"compact-revision":1066}
	{"level":"info","ts":"2024-06-10T11:25:03.825601Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2940}
	{"level":"info","ts":"2024-06-10T11:25:03.87725Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2940,"took":"50.92556ms","hash":3928884389,"current-db-size-bytes":3715072,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2793472,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-06-10T11:25:03.877302Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3928884389,"revision":2940,"compact-revision":1975}
	{"level":"info","ts":"2024-06-10T11:28:49.485944Z","caller":"traceutil/trace.go:171","msg":"trace[968240815] transaction","detail":"{read_only:false; response_revision:4237; number_of_response:1; }","duration":"118.444755ms","start":"2024-06-10T11:28:49.367479Z","end":"2024-06-10T11:28:49.485924Z","steps":["trace[968240815] 'process raft request'  (duration: 56.1653ms)","trace[968240815] 'compare'  (duration: 62.057453ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T11:30:03.852389Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3683}
	{"level":"info","ts":"2024-06-10T11:30:03.895756Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":3683,"took":"42.237075ms","hash":3759524896,"current-db-size-bytes":3715072,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2064384,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-06-10T11:30:03.895884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3759524896,"revision":3683,"compact-revision":2940}
	
	
	==> kernel <==
	 11:32:19 up 29 min,  0 users,  load average: 0.56, 0.64, 0.51
	Linux ha-368100 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [73444aa5980b] <==
	I0610 11:31:47.362028       1 main.go:250] Node ha-368100-m04 has CIDR [10.244.3.0/24] 
	I0610 11:31:57.376308       1 main.go:223] Handling node with IPs: map[172.17.146.64:{}]
	I0610 11:31:57.376594       1 main.go:227] handling current node
	I0610 11:31:57.376618       1 main.go:223] Handling node with IPs: map[172.17.157.100:{}]
	I0610 11:31:57.376635       1 main.go:250] Node ha-368100-m02 has CIDR [10.244.1.0/24] 
	I0610 11:31:57.376915       1 main.go:223] Handling node with IPs: map[172.17.144.162:{}]
	I0610 11:31:57.376932       1 main.go:250] Node ha-368100-m03 has CIDR [10.244.2.0/24] 
	I0610 11:31:57.377129       1 main.go:223] Handling node with IPs: map[172.17.151.201:{}]
	I0610 11:31:57.377207       1 main.go:250] Node ha-368100-m04 has CIDR [10.244.3.0/24] 
	I0610 11:32:07.394138       1 main.go:223] Handling node with IPs: map[172.17.146.64:{}]
	I0610 11:32:07.394223       1 main.go:227] handling current node
	I0610 11:32:07.394239       1 main.go:223] Handling node with IPs: map[172.17.157.100:{}]
	I0610 11:32:07.394247       1 main.go:250] Node ha-368100-m02 has CIDR [10.244.1.0/24] 
	I0610 11:32:07.394442       1 main.go:223] Handling node with IPs: map[172.17.144.162:{}]
	I0610 11:32:07.394505       1 main.go:250] Node ha-368100-m03 has CIDR [10.244.2.0/24] 
	I0610 11:32:07.394627       1 main.go:223] Handling node with IPs: map[172.17.151.201:{}]
	I0610 11:32:07.394723       1 main.go:250] Node ha-368100-m04 has CIDR [10.244.3.0/24] 
	I0610 11:32:17.408530       1 main.go:223] Handling node with IPs: map[172.17.146.64:{}]
	I0610 11:32:17.408587       1 main.go:227] handling current node
	I0610 11:32:17.408603       1 main.go:223] Handling node with IPs: map[172.17.157.100:{}]
	I0610 11:32:17.408892       1 main.go:250] Node ha-368100-m02 has CIDR [10.244.1.0/24] 
	I0610 11:32:17.409906       1 main.go:223] Handling node with IPs: map[172.17.144.162:{}]
	I0610 11:32:17.410122       1 main.go:250] Node ha-368100-m03 has CIDR [10.244.2.0/24] 
	I0610 11:32:17.410210       1 main.go:223] Handling node with IPs: map[172.17.151.201:{}]
	I0610 11:32:17.410241       1 main.go:250] Node ha-368100-m04 has CIDR [10.244.3.0/24] 
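
kindnet re-lists the nodes every ten seconds and keeps one route per remote PodCIDR, reachable via that node's InternalIP. A sketch of the route set implied by the log above (addresses and CIDRs copied from the node descriptions earlier; kindnet programs these via netlink rather than shelling out to ip):

	// Print the node-to-PodCIDR routes implied by the kindnet log.
	package main

	import "fmt"

	func main() {
		type peer struct{ name, nodeIP, podCIDR string }
		peers := []peer{
			{"ha-368100", "172.17.146.64", "10.244.0.0/24"},
			{"ha-368100-m02", "172.17.157.100", "10.244.1.0/24"},
			{"ha-368100-m03", "172.17.144.162", "10.244.2.0/24"},
			{"ha-368100-m04", "172.17.151.201", "10.244.3.0/24"},
		}
		for _, p := range peers {
			// The daemon skips its own entry ("handling current node").
			fmt.Printf("%-14s ip route replace %s via %s\n", p.name, p.podCIDR, p.nodeIP)
		}
	}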
	
	
	==> kube-apiserver [f08944a38cbb] <==
	E0610 11:13:11.401653       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0610 11:13:11.410924       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0610 11:13:11.411659       1 timeout.go:142] post-timeout activity - time-elapsed: 22.994289ms, PATCH "/api/v1/namespaces/default/events/ha-368100-m03.17d7a04286a8c112" result: <nil>
	E0610 11:14:24.513781       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61725: use of closed network connection
	E0610 11:14:26.221526       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61727: use of closed network connection
	E0610 11:14:26.795278       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61729: use of closed network connection
	E0610 11:14:27.446805       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61731: use of closed network connection
	E0610 11:14:28.022660       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61733: use of closed network connection
	E0610 11:14:28.630613       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61735: use of closed network connection
	E0610 11:14:29.191488       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61737: use of closed network connection
	E0610 11:14:29.763799       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61739: use of closed network connection
	E0610 11:14:30.322607       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61741: use of closed network connection
	E0610 11:14:31.367571       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61744: use of closed network connection
	E0610 11:14:41.912609       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61746: use of closed network connection
	E0610 11:14:42.481822       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61749: use of closed network connection
	E0610 11:14:53.050075       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61751: use of closed network connection
	E0610 11:14:53.594397       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61754: use of closed network connection
	E0610 11:15:04.145164       1 conn.go:339] Error on socket receive: read tcp 172.17.159.254:8443->172.17.144.1:61756: use of closed network connection
	I0610 11:18:59.596764       1 trace.go:236] Trace[1256538062]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.17.146.64,type:*v1.Endpoints,resource:apiServerIPInfo (10-Jun-2024 11:18:59.089) (total time: 506ms):
	Trace[1256538062]: ---"Transaction prepared" 207ms (11:18:59.316)
	Trace[1256538062]: ---"Txn call completed" 278ms (11:18:59.595)
	Trace[1256538062]: [506.181631ms] [506.181631ms] END
	I0610 11:18:59.904011       1 trace.go:236] Trace[1507961070]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:78c480fb-4c4b-432c-ae85-5ac0ba47f7d6,client:172.17.151.201,api-group:,api-version:v1,name:kindnet-pwxrz,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kindnet-pwxrz,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:DELETE (10-Jun-2024 11:18:59.121) (total time: 782ms):
	Trace[1507961070]: ---"Object deleted from database" 305ms (11:18:59.903)
	Trace[1507961070]: [782.213312ms] [782.213312ms] END
	
	
	==> kube-controller-manager [d777e3ce95a0] <==
	I0610 11:09:09.906077       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-368100-m02" podCIDRs=["10.244.1.0/24"]
	I0610 11:09:14.090995       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-368100-m02"
	I0610 11:13:10.561923       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-368100-m03\" does not exist"
	I0610 11:13:10.615274       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-368100-m03" podCIDRs=["10.244.2.0/24"]
	I0610 11:13:14.259195       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-368100-m03"
	I0610 11:14:18.367411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="235.433656ms"
	I0610 11:14:18.653039       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="283.895161ms"
	I0610 11:14:18.829591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="176.462341ms"
	I0610 11:14:18.869124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.35541ms"
	I0610 11:14:18.869436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="131.602µs"
	I0610 11:14:19.037730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.735535ms"
	I0610 11:14:19.038609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="133.601µs"
	I0610 11:14:19.870150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="198.702µs"
	I0610 11:14:20.257904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.7µs"
	I0610 11:14:21.694738       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="123.833615ms"
	E0610 11:14:21.694853       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0610 11:14:21.695207       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="306.403µs"
	I0610 11:14:21.700984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.4µs"
	I0610 11:14:21.745901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.413789ms"
	I0610 11:14:21.747423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.2µs"
	E0610 11:18:54.777445       1 certificate_controller.go:146] Sync csr-t4l29 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-t4l29": the object has been modified; please apply your changes to the latest version and try again
	I0610 11:18:54.863705       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-368100-m04\" does not exist"
	I0610 11:18:54.882793       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-368100-m04" podCIDRs=["10.244.3.0/24"]
	I0610 11:18:59.617162       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-368100-m04"
	I0610 11:19:17.727994       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-368100-m04"
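
The "object has been modified; please apply your changes to the latest version and try again" failures are optimistic-concurrency conflicts: an update carried a stale resourceVersion, and the controller simply re-reads and re-syncs on its next pass. Outside a controller loop, the usual client-go idiom for the same error is retry.RetryOnConflict — a generic sketch, not minikube's code:

	// Generic conflict-retry pattern with client-go.
	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	func scaleReplicaSet(cs kubernetes.Interface, ns, name string, replicas int32) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read on every attempt so the update carries a fresh resourceVersion.
			rs, err := cs.AppsV1().ReplicaSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			rs.Spec.Replicas = &replicas
			_, err = cs.AppsV1().ReplicaSets(ns).Update(context.TODO(), rs, metav1.UpdateOptions{})
			return err
		})
	}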
	
	
	==> kube-proxy [115b8330d533] <==
	I0610 11:05:27.986657       1 server_linux.go:69] "Using iptables proxy"
	I0610 11:05:28.031278       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.146.64"]
	I0610 11:05:28.111180       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 11:05:28.111377       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 11:05:28.111412       1 server_linux.go:165] "Using iptables Proxier"
	I0610 11:05:28.115481       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 11:05:28.116168       1 server.go:872] "Version info" version="v1.30.1"
	I0610 11:05:28.116823       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:05:28.120557       1 config.go:192] "Starting service config controller"
	I0610 11:05:28.121679       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 11:05:28.121788       1 config.go:101] "Starting endpoint slice config controller"
	I0610 11:05:28.121930       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 11:05:28.126749       1 config.go:319] "Starting node config controller"
	I0610 11:05:28.127161       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 11:05:28.222294       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 11:05:28.222375       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:05:28.228406       1 shared_informer.go:320] Caches are synced for node config
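
The startup sequence above is the standard client-go informer handshake: start the service, endpoint-slice, and node config controllers, wait for their informer caches to sync, then begin programming iptables. A generic sketch of that idiom (not kube-proxy's actual code):

	// Start an informer factory and block until its caches sync.
	package example

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
	)

	func waitForServices(cs kubernetes.Interface, stop <-chan struct{}) error {
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()

		factory.Start(stop) // informers run in background goroutines
		if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
			return fmt.Errorf("timed out waiting for service cache to sync")
		}
		return nil
	}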
	
	
	==> kube-scheduler [b540b6d71db6] <==
	E0610 11:05:07.921274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 11:05:07.925666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 11:05:07.926087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 11:05:07.986822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 11:05:07.987136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 11:05:08.137373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 11:05:08.137484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 11:05:08.139816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 11:05:08.139869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 11:05:08.149386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 11:05:08.150025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 11:05:09.628726       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 11:14:18.236942       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="e47805c5-7a7b-4b89-9d16-10d91abbec83" pod="default/busybox-fc5497c4f-9tfq9" assumedNode="ha-368100-m02" currentNode="ha-368100-m03"
	E0610 11:14:18.282383       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-9tfq9\": pod busybox-fc5497c4f-9tfq9 is already assigned to node \"ha-368100-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-9tfq9" node="ha-368100-m03"
	E0610 11:14:18.286287       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e47805c5-7a7b-4b89-9d16-10d91abbec83(default/busybox-fc5497c4f-9tfq9) was assumed on ha-368100-m03 but assigned to ha-368100-m02" pod="default/busybox-fc5497c4f-9tfq9"
	E0610 11:14:18.287934       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-9tfq9\": pod busybox-fc5497c4f-9tfq9 is already assigned to node \"ha-368100-m02\"" pod="default/busybox-fc5497c4f-9tfq9"
	I0610 11:14:18.288192       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-9tfq9" node="ha-368100-m02"
	E0610 11:14:18.378597       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-s49nb\": pod busybox-fc5497c4f-s49nb is already assigned to node \"ha-368100-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-s49nb" node="ha-368100-m03"
	E0610 11:14:18.378674       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4ad912e0-e757-4368-99a7-6687d9687526(default/busybox-fc5497c4f-s49nb) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-s49nb"
	E0610 11:14:18.379366       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-s49nb\": pod busybox-fc5497c4f-s49nb is already assigned to node \"ha-368100-m03\"" pod="default/busybox-fc5497c4f-s49nb"
	I0610 11:14:18.379392       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-s49nb" node="ha-368100-m03"
	E0610 11:14:18.419494       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kff2v\": pod busybox-fc5497c4f-kff2v is already assigned to node \"ha-368100\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-kff2v" node="ha-368100"
	E0610 11:14:18.420731       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod beea6f51-8d7f-45a8-a021-48301c4e9268(default/busybox-fc5497c4f-kff2v) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-kff2v"
	E0610 11:14:18.422553       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kff2v\": pod busybox-fc5497c4f-kff2v is already assigned to node \"ha-368100\"" pod="default/busybox-fc5497c4f-kff2v"
	I0610 11:14:18.422680       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-kff2v" node="ha-368100"
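The pods/binding conflicts above show the scheduler's optimistic cache at work: a pod was assumed onto one node while the API server already recorded a binding to another, so the DefaultBinder's write fails and the assumption is rolled back. A minimal sketch, using client-go's Bind helper, of the call that produces this conflict; the clientset construction is illustrative and the pod/node names are copied from the log, this is not minikube's own code:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// bindPod issues the same pods/binding subresource request the scheduler's
	// DefaultBinder makes; binding an already-bound pod yields the
	// "Operation cannot be fulfilled" conflict seen in the log.
	func bindPod(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
		return cs.CoreV1().Pods(ns).Bind(ctx, &corev1.Binding{
			ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: ns},
			Target:     corev1.ObjectReference{Kind: "Node", Name: node},
		}, metav1.CreateOptions{})
	}

	func main() {
		cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = bindPod(context.Background(), cs, "default", "busybox-fc5497c4f-9tfq9", "ha-368100-m03")
		if err != nil {
			fmt.Println("bind failed:", err) // e.g. pod already assigned to ha-368100-m02
		}
	}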
	
	
	==> kubelet <==
	Jun 10 11:28:10 ha-368100 kubelet[2217]: E0610 11:28:10.242880    2217 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:28:10 ha-368100 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:28:10 ha-368100 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:28:10 ha-368100 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:28:10 ha-368100 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:29:10 ha-368100 kubelet[2217]: E0610 11:29:10.248280    2217 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:29:10 ha-368100 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:29:10 ha-368100 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:29:10 ha-368100 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:29:10 ha-368100 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:30:10 ha-368100 kubelet[2217]: E0610 11:30:10.243198    2217 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:30:10 ha-368100 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:30:10 ha-368100 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:30:10 ha-368100 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:30:10 ha-368100 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:31:10 ha-368100 kubelet[2217]: E0610 11:31:10.245524    2217 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:31:10 ha-368100 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:31:10 ha-368100 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:31:10 ha-368100 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:31:10 ha-368100 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:32:10 ha-368100 kubelet[2217]: E0610 11:32:10.243874    2217 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:32:10 ha-368100 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:32:10 ha-368100 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:32:10 ha-368100 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:32:10 ha-368100 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
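The repeating kubelet error above is its iptables canary: roughly once a minute it recreates a KUBE-KUBELET-CANARY chain to detect external rule flushes, and the IPv6 half fails because this guest kernel has no nat table for ip6tables. A minimal Go sketch of the same probe, under the assumption that shelling out to the iptables binaries is acceptable:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// canary attempts to create the marker chain in the given table; on this
	// guest the ip6tables call fails with "Table does not exist" because the
	// kernel lacks IPv6 NAT support (exit status 3, as in the log).
	func canary(bin, table string) error {
		out, err := exec.Command(bin, "-t", table, "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %v: %s", bin, err, out)
		}
		return nil
	}

	func main() {
		for _, bin := range []string{"iptables", "ip6tables"} {
			if err := canary(bin, "nat"); err != nil {
				fmt.Println(err)
			}
		}
	}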
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 11:32:10.458234   11132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
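The stderr warning that recurs through this whole report is cosmetic: the Docker CLI config names a current context ("default") whose metadata directory is missing. The directory is keyed by the SHA-256 of the context name, which is why the path ends in 37a8eec1... (the digest of "default"). A minimal sketch of deriving that path; the layout mirrors the Docker CLI's context store, and the helper name is made up:

	package main

	import (
		"crypto/sha256"
		"fmt"
		"path/filepath"
	)

	// contextMetaPath mirrors the Docker CLI context store layout:
	// <docker dir>/contexts/meta/<sha256 of context name>/meta.json.
	func contextMetaPath(dockerDir, name string) string {
		sum := sha256.Sum256([]byte(name))
		return filepath.Join(dockerDir, "contexts", "meta", fmt.Sprintf("%x", sum), "meta.json")
	}

	func main() {
		// On Windows this prints the same path the warning complains about,
		// ending in 37a8eec1...f0688f\meta.json.
		fmt.Println(contextMetaPath(`C:\Users\jenkins.minikube6\.docker`, "default"))
	}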
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-368100 -n ha-368100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-368100 -n ha-368100: (13.4843855s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-368100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (707.07s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (59.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-czxmt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-czxmt -- sh -c "ping -c 1 172.17.144.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-czxmt -- sh -c "ping -c 1 172.17.144.1": exit status 1 (10.5454872s)

                                                
                                                
-- stdout --
	PING 172.17.144.1 (172.17.144.1): 56 data bytes
	
	--- 172.17.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:12:29.903380    9464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.17.144.1) from pod (busybox-fc5497c4f-czxmt): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-z28tq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-z28tq -- sh -c "ping -c 1 172.17.144.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-z28tq -- sh -c "ping -c 1 172.17.144.1": exit status 1 (10.5207498s)

                                                
                                                
-- stdout --
	PING 172.17.144.1 (172.17.144.1): 56 data bytes
	
	--- 172.17.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:12:40.940895    1988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.17.144.1) from pod (busybox-fc5497c4f-z28tq): exit status 1
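Both probes resolve host.minikube.internal and transmit, but no echo reply comes back, so the failure looks like the Windows host dropping inbound ICMP on the Hyper-V Default Switch rather than a pod-side routing problem, though the log alone cannot confirm that. A minimal sketch of the same check the test performs, shelling out to kubectl; the context and pod names are taken from the log, and kubectl on PATH is assumed:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pingFromPod mirrors the test's check: run a one-packet ping inside the
	// busybox pod and report the exit status (1 on 100% packet loss).
	func pingFromPod(kubeContext, pod, ip string) error {
		cmd := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
			"sh", "-c", fmt.Sprintf("ping -c 1 %s", ip))
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		return err
	}

	func main() {
		if err := pingFromPod("multinode-813300", "busybox-fc5497c4f-czxmt", "172.17.144.1"); err != nil {
			fmt.Println("ping failed:", err)
		}
	}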
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-813300 -n multinode-813300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-813300 -n multinode-813300: (13.0404522s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-813300 logs -n 25: (9.2081152s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-314000 ssh -- ls                    | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:00 UTC | 10 Jun 24 12:00 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-314000                           | mount-start-1-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:00 UTC | 10 Jun 24 12:01 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-314000 ssh -- ls                    | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:01 UTC | 10 Jun 24 12:01 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-314000                           | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:01 UTC | 10 Jun 24 12:01 UTC |
	| start   | -p mount-start-2-314000                           | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:01 UTC | 10 Jun 24 12:03 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:03 UTC |                     |
	|         | --profile mount-start-2-314000 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid 0      |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-314000 ssh -- ls                    | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:03 UTC | 10 Jun 24 12:04 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-314000                           | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:04 UTC |
	| delete  | -p mount-start-1-314000                           | mount-start-1-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:04 UTC |
	| start   | -p multinode-813300                               | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:11 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- apply -f                   | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- rollout                    | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC |                     |
	|         | busybox-fc5497c4f-czxmt -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC |                     |
	|         | busybox-fc5497c4f-z28tq -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 12:04:43
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 12:04:43.867977    4588 out.go:291] Setting OutFile to fd 712 ...
	I0610 12:04:43.868768    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:04:43.868768    4588 out.go:304] Setting ErrFile to fd 776...
	I0610 12:04:43.868768    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:04:43.892667    4588 out.go:298] Setting JSON to false
	I0610 12:04:43.895275    4588 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20972,"bootTime":1718000111,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 12:04:43.895275    4588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 12:04:43.900472    4588 out.go:177] * [multinode-813300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 12:04:43.904368    4588 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:04:43.904368    4588 notify.go:220] Checking for updates...
	I0610 12:04:43.909526    4588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 12:04:43.912565    4588 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 12:04:43.917533    4588 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 12:04:43.919941    4588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 12:04:43.923788    4588 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:04:43.924271    4588 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 12:04:49.675599    4588 out.go:177] * Using the hyperv driver based on user configuration
	I0610 12:04:49.679131    4588 start.go:297] selected driver: hyperv
	I0610 12:04:49.679287    4588 start.go:901] validating driver "hyperv" against <nil>
	I0610 12:04:49.679287    4588 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 12:04:49.728962    4588 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 12:04:49.730655    4588 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:04:49.730655    4588 cni.go:84] Creating CNI manager for ""
	I0610 12:04:49.730655    4588 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 12:04:49.730655    4588 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 12:04:49.730655    4588 start.go:340] cluster config:
	{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:04:49.730655    4588 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:04:49.735782    4588 out.go:177] * Starting "multinode-813300" primary control-plane node in "multinode-813300" cluster
	I0610 12:04:49.737542    4588 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:04:49.738389    4588 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 12:04:49.738389    4588 cache.go:56] Caching tarball of preloaded images
	I0610 12:04:49.738521    4588 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:04:49.738973    4588 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 12:04:49.739157    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:04:49.739400    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json: {Name:mke1756b0f63dd0c0eff0216ad43e7c3fc903678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:04:49.740675    4588 start.go:360] acquireMachinesLock for multinode-813300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:04:49.740675    4588 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300"
	I0610 12:04:49.740675    4588 start.go:93] Provisioning new machine with config: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 12:04:49.740675    4588 start.go:125] createHost starting for "" (driver="hyperv")
	I0610 12:04:49.742990    4588 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 12:04:49.744068    4588 start.go:159] libmachine.API.Create for "multinode-813300" (driver="hyperv")
	I0610 12:04:49.744068    4588 client.go:168] LocalClient.Create starting
	I0610 12:04:49.744355    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 12:04:49.744355    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:04:49.745001    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:04:49.745251    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 12:04:49.745288    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:04:49.745537    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:04:49.745648    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 12:04:51.938878    4588 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 12:04:51.938878    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:04:51.939553    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 12:04:53.807457    4588 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 12:04:53.807457    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:04:53.808222    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:04:55.393412    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:04:55.393412    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:04:55.393412    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:04:59.273212    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:04:59.274143    4588 main.go:141] libmachine: [stderr =====>] : 
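The driver discovers switches by asking PowerShell for JSON and keeping external switches plus the well-known Default Switch GUID. A minimal Go sketch of parsing that output; the struct is illustrative, shaped after the three selected fields:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// vmSwitch matches the three fields selected in the Get-VMSwitch query;
	// Hyper-V switch types are 0 = Private, 1 = Internal, 2 = External.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	func main() {
		// JSON captured from the log output above.
		raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
		var switches []vmSwitch
		if err := json.Unmarshal([]byte(raw), &switches); err != nil {
			panic(err)
		}
		fmt.Printf("using switch %q (%s)\n", switches[0].Name, switches[0].Id)
	}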
	I0610 12:04:59.276499    4588 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 12:04:59.786597    4588 main.go:141] libmachine: Creating SSH key...
	I0610 12:05:00.178242    4588 main.go:141] libmachine: Creating VM...
	I0610 12:05:00.178340    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:05:03.335727    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:05:03.335727    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:03.336442    4588 main.go:141] libmachine: Using switch "Default Switch"
	I0610 12:05:03.336442    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:05:05.206486    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:05:05.206839    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:05.206839    4588 main.go:141] libmachine: Creating VHD
	I0610 12:05:05.206938    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 12:05:09.220962    4588 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D79874B4-719D-480C-BEAA-32F87CD7D741
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 12:05:09.221783    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:09.221783    4588 main.go:141] libmachine: Writing magic tar header
	I0610 12:05:09.221873    4588 main.go:141] libmachine: Writing SSH key tar header
	I0610 12:05:09.231477    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 12:05:12.585103    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:12.585103    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:12.586033    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\disk.vhd' -SizeBytes 20000MB
	I0610 12:05:15.285675    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:15.285675    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:15.285962    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-813300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 12:05:19.111640    4588 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-813300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 12:05:19.111640    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:19.112222    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-813300 -DynamicMemoryEnabled $false
	I0610 12:05:21.531378    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:21.531378    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:21.531378    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-813300 -Count 2
	I0610 12:05:23.889725    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:23.889725    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:23.890596    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-813300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\boot2docker.iso'
	I0610 12:05:26.621094    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:26.621720    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:26.621781    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-813300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\disk.vhd'
	I0610 12:05:29.472370    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:29.472370    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:29.472370    4588 main.go:141] libmachine: Starting VM...
	I0610 12:05:29.473255    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300
	I0610 12:05:32.754805    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:32.754805    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:32.754805    4588 main.go:141] libmachine: Waiting for host to start...
	I0610 12:05:32.754805    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:35.217643    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:35.218086    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:35.218212    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:37.944028    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:37.944028    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:38.950550    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:41.379344    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:41.379344    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:41.380252    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:44.115382    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:44.115382    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:45.121347    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:47.512650    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:47.512650    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:47.513336    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:50.281297    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:50.281297    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:51.289490    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:53.673938    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:53.674570    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:53.674570    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:56.397148    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:56.398100    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:57.399811    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:59.797095    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:59.797152    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:59.797152    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:02.530578    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:02.530578    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:02.530897    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:04.770192    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:04.770234    4588 main.go:141] libmachine: [stderr =====>] : 
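The repetition above is the driver polling Hyper-V until the VM's first NIC reports an address (empty stdout means no DHCP lease yet). A minimal sketch of such a wait loop, shelling out to powershell.exe the same way the log shows; the timeout value and helper names are made up:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// ps runs one PowerShell command the way the log shows and returns stdout.
	func ps(command string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitForIP polls the VM's first network adapter until Hyper-V reports an
	// address; empty output (as in the log) means the lease has not arrived yet.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ip, err := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
	}

	func main() {
		ip, err := waitForIP("multinode-813300", 5*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("VM IP:", ip)
	}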
	I0610 12:06:04.770296    4588 machine.go:94] provisionDockerMachine start ...
	I0610 12:06:04.770296    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:07.058629    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:07.058629    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:07.059046    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:09.847341    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:09.848100    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:09.853806    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:09.864878    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:09.864878    4588 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:06:09.992682    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:06:09.992682    4588 buildroot.go:166] provisioning hostname "multinode-813300"
	I0610 12:06:09.992830    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:12.311800    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:12.311800    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:12.312418    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:15.048157    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:15.048157    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:15.055378    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:15.055541    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:15.055541    4588 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300 && echo "multinode-813300" | sudo tee /etc/hostname
	I0610 12:06:15.227442    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300
	
	I0610 12:06:15.227442    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:17.470385    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:17.470385    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:17.470748    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:20.178259    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:20.178259    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:20.185354    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:20.185738    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:20.185872    4588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:06:20.340364    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:06:20.340364    4588 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:06:20.340507    4588 buildroot.go:174] setting up certificates
	I0610 12:06:20.340593    4588 provision.go:84] configureAuth start
	I0610 12:06:20.340593    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:22.647449    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:22.647770    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:22.647870    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:25.365433    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:25.366134    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:25.366227    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:27.676201    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:27.677237    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:27.677302    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:30.462238    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:30.462450    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:30.462450    4588 provision.go:143] copyHostCerts
	I0610 12:06:30.462450    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 12:06:30.463207    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:06:30.463207    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:06:30.463939    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:06:30.464777    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 12:06:30.465582    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:06:30.465582    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:06:30.465582    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:06:30.466886    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 12:06:30.466886    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:06:30.466886    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:06:30.467429    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:06:30.467908    4588 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300 san=[127.0.0.1 172.17.159.171 localhost minikube multinode-813300]
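The server certificate generated above carries a SAN list covering every name and address the Docker TLS endpoint may be reached by (127.0.0.1, the VM IP, localhost, minikube, and the machine name). A minimal Go sketch of the same SAN handling with the standard library — self-signed here for brevity, where minikube actually signs with ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-813300"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN list from the log, split into the two x509 fields.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.159.171")},
    		DNSNames:    []string{"localhost", "minikube", "multinode-813300"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }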
	I0610 12:06:30.880090    4588 provision.go:177] copyRemoteCerts
	I0610 12:06:30.893142    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:06:30.893241    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:33.157947    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:33.158648    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:33.158648    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:35.872452    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:35.873367    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:35.873367    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:06:35.983936    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0907517s)
	I0610 12:06:35.984059    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 12:06:35.984539    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:06:36.037427    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 12:06:36.037713    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 12:06:36.087322    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 12:06:36.087855    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 12:06:36.138563    4588 provision.go:87] duration metric: took 15.7977809s to configureAuth
	I0610 12:06:36.138653    4588 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:06:36.138819    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:06:36.138819    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:38.411440    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:38.411440    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:38.411440    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:41.131406    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:41.131406    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:41.138066    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:41.138428    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:41.138428    4588 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:06:41.270867    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:06:41.270942    4588 buildroot.go:70] root file system type: tmpfs
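The root-filesystem probe above (`df --output=fstype / | tail -n 1` returning tmpfs) tells the provisioner it is running on the buildroot live image, so the docker unit must be rewritten rather than assumed persistent. A hedged sketch of the same probe run locally instead of over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // rootFSType mirrors the check in the log: ask GNU df for the
    // fstype column of "/" and keep the last field of the output.
    func rootFSType() (string, error) {
    	out, err := exec.Command("df", "--output=fstype", "/").Output()
    	if err != nil {
    		return "", err
    	}
    	fields := strings.Fields(strings.TrimSpace(string(out)))
    	if len(fields) == 0 {
    		return "", fmt.Errorf("unexpected empty df output")
    	}
    	return fields[len(fields)-1], nil
    }

    func main() {
    	t, err := rootFSType()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(t) // "tmpfs" inside the minikube guest
    }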
	I0610 12:06:41.271213    4588 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:06:41.271282    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:43.585535    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:43.585535    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:43.585535    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:46.334256    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:46.334341    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:46.340258    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:46.340937    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:46.340937    4588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:06:46.504832    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 12:06:46.505009    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:48.805219    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:48.806280    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:48.806423    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:51.509193    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:51.509586    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:51.514228    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:51.514228    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:51.514228    4588 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:06:53.697279    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:06:53.697853    4588 machine.go:97] duration metric: took 48.9265831s to provisionDockerMachine
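The `diff ... || { mv ...; daemon-reload; enable; restart; }` one-liner above is minikube's idempotent update idiom: the unit is only swapped in, and docker only restarted, when the content actually differs (or, as here, the file does not exist yet). A small Go sketch of the same write-only-if-changed pattern, using hypothetical paths:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // replaceIfChanged installs newContent at path only when it differs
    // from what is already there, and reports whether a restart is needed.
    func replaceIfChanged(path string, newContent []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, newContent) {
    		return false, nil // unchanged: skip daemon-reload/restart
    	}
    	if err != nil && !os.IsNotExist(err) {
    		return false, err
    	}
    	if err := os.WriteFile(path, newContent, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("restart needed:", changed) // true would trigger daemon-reload && restart
    }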
	I0610 12:06:53.697853    4588 client.go:171] duration metric: took 2m3.9527697s to LocalClient.Create
	I0610 12:06:53.698031    4588 start.go:167] duration metric: took 2m3.9529368s to libmachine.API.Create "multinode-813300"
	I0610 12:06:53.698085    4588 start.go:293] postStartSetup for "multinode-813300" (driver="hyperv")
	I0610 12:06:53.698115    4588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:06:53.710436    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:06:53.710436    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:55.966771    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:55.966771    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:55.966771    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:58.718421    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:58.718421    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:58.719167    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:06:58.827171    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1166938s)
	I0610 12:06:58.839755    4588 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:06:58.846848    4588 command_runner.go:130] > NAME=Buildroot
	I0610 12:06:58.846848    4588 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 12:06:58.846848    4588 command_runner.go:130] > ID=buildroot
	I0610 12:06:58.846848    4588 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 12:06:58.846848    4588 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 12:06:58.847038    4588 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:06:58.847038    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:06:58.847652    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:06:58.848877    4588 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:06:58.848877    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 12:06:58.861906    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:06:58.883111    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:06:58.930581    4588 start.go:296] duration metric: took 5.2324233s for postStartSetup
	I0610 12:06:58.932577    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:01.213042    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:01.214102    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:01.214102    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:03.953887    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:03.954621    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:03.954896    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:07:03.957997    4588 start.go:128] duration metric: took 2m14.216153s to createHost
	I0610 12:07:03.957997    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:06.232653    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:06.232653    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:06.232653    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:08.922879    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:08.922879    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:08.928691    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:07:08.928691    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:07:08.928691    4588 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 12:07:09.066125    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718021229.075627913
	
	I0610 12:07:09.066125    4588 fix.go:216] guest clock: 1718021229.075627913
	I0610 12:07:09.066125    4588 fix.go:229] Guest: 2024-06-10 12:07:09.075627913 +0000 UTC Remote: 2024-06-10 12:07:03.9579973 +0000 UTC m=+140.257965001 (delta=5.117630613s)
	I0610 12:07:09.066240    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:11.379014    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:11.379014    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:11.379357    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:14.163833    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:14.163833    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:14.170036    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:07:14.170200    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:07:14.170200    4588 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718021229
	I0610 12:07:14.308564    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:07:09 UTC 2024
	
	I0610 12:07:14.308564    4588 fix.go:236] clock set: Mon Jun 10 12:07:09 UTC 2024
	 (err=<nil>)
	I0610 12:07:14.308564    4588 start.go:83] releasing machines lock for "multinode-813300", held for 2m24.5667064s
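The clock fix just above reads `date +%s.%N` on the guest, compares it with the host-side timestamp (delta=5.117630613s here), and resets the guest via `sudo date -s @<epoch>`. A simplified sketch of the delta computation, assuming the guest timestamp string has already been captured (the real fix.go parses seconds and nanoseconds separately, so this float version loses sub-second precision):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, as seen in the log.
    	guestRaw := "1718021229.075627913"
    	secs, err := strconv.ParseFloat(guestRaw, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	// The log compares the guest clock against the host's "Remote" time.
    	delta := guest.Sub(time.Now())
    	fmt.Printf("guest clock: %s (delta=%s)\n", guest.UTC(), delta)
    	// When the drift is too large, the provisioner issues:
    	fmt.Printf("sudo date -s @%d\n", guest.Unix())
    }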
	I0610 12:07:14.308728    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:16.583361    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:16.583361    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:16.583361    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:19.333520    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:19.334493    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:19.338942    4588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:07:19.339115    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:19.349878    4588 ssh_runner.go:195] Run: cat /version.json
	I0610 12:07:19.349878    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:21.705493    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:21.705493    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:21.705493    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:21.736050    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:21.736147    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:21.736191    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:24.564607    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:24.564844    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:24.564844    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:07:24.595261    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:24.595261    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:24.596193    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:07:24.730348    4588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 12:07:24.730492    4588 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3915062s)
	I0610 12:07:24.730492    4588 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 12:07:24.730492    4588 ssh_runner.go:235] Completed: cat /version.json: (5.3805704s)
	I0610 12:07:24.743901    4588 ssh_runner.go:195] Run: systemctl --version
	I0610 12:07:24.755276    4588 command_runner.go:130] > systemd 252 (252)
	I0610 12:07:24.755521    4588 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 12:07:24.768011    4588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 12:07:24.776306    4588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 12:07:24.777113    4588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:07:24.788496    4588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:07:24.821922    4588 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 12:07:24.822097    4588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
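The find/mv pass above keeps docker's networking authoritative by renaming any bridge or podman CNI configs under /etc/cni/net.d with a .mk_disabled suffix (one conflist was disabled here). The same rename pass as a Go sketch, assuming local paths:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	matches, err := filepath.Glob("/etc/cni/net.d/*")
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range matches {
    		base := filepath.Base(p)
    		if strings.HasSuffix(base, ".mk_disabled") {
    			continue // already disabled on a previous pass
    		}
    		if !strings.Contains(base, "bridge") && !strings.Contains(base, "podman") {
    			continue
    		}
    		if err := os.Rename(p, p+".mk_disabled"); err != nil {
    			panic(err)
    		}
    		fmt.Println("disabled", p)
    	}
    }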
	I0610 12:07:24.822097    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:07:24.822097    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:07:24.858836    4588 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 12:07:24.870754    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:07:24.906067    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:07:24.927089    4588 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:07:24.939539    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:07:24.975868    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:07:25.012044    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:07:25.051040    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:07:25.093321    4588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:07:25.128698    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:07:25.161844    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:07:25.194094    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 12:07:25.228546    4588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:07:25.253020    4588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 12:07:25.266396    4588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:07:25.300773    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:25.529366    4588 ssh_runner.go:195] Run: sudo systemctl restart containerd
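Even though docker is the selected runtime, the sed pass above normalizes /etc/containerd/config.toml (sandbox image, SystemdCgroup=false, runc v2, conf_dir) before containerd is restarted, so the file stays consistent if that runtime is ever re-enabled. One of those rewrites expressed as a Go regexp, with the same capture-and-replace shape as the sed expression:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true`
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }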
	I0610 12:07:25.568641    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:07:25.581890    4588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:07:25.609889    4588 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 12:07:25.610189    4588 command_runner.go:130] > [Unit]
	I0610 12:07:25.610189    4588 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 12:07:25.610189    4588 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 12:07:25.610189    4588 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 12:07:25.610264    4588 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 12:07:25.610264    4588 command_runner.go:130] > StartLimitBurst=3
	I0610 12:07:25.610264    4588 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 12:07:25.610264    4588 command_runner.go:130] > [Service]
	I0610 12:07:25.610323    4588 command_runner.go:130] > Type=notify
	I0610 12:07:25.610323    4588 command_runner.go:130] > Restart=on-failure
	I0610 12:07:25.610323    4588 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 12:07:25.610381    4588 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 12:07:25.610381    4588 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 12:07:25.610381    4588 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 12:07:25.610460    4588 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 12:07:25.610460    4588 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 12:07:25.610460    4588 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 12:07:25.610541    4588 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 12:07:25.610541    4588 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 12:07:25.610541    4588 command_runner.go:130] > ExecStart=
	I0610 12:07:25.610541    4588 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 12:07:25.610727    4588 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 12:07:25.610787    4588 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 12:07:25.610787    4588 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 12:07:25.610787    4588 command_runner.go:130] > LimitNOFILE=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > LimitNPROC=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > LimitCORE=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 12:07:25.610845    4588 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 12:07:25.610845    4588 command_runner.go:130] > TasksMax=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > TimeoutStartSec=0
	I0610 12:07:25.610922    4588 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 12:07:25.610922    4588 command_runner.go:130] > Delegate=yes
	I0610 12:07:25.610922    4588 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 12:07:25.610922    4588 command_runner.go:130] > KillMode=process
	I0610 12:07:25.610978    4588 command_runner.go:130] > [Install]
	I0610 12:07:25.610978    4588 command_runner.go:130] > WantedBy=multi-user.target
	I0610 12:07:25.624039    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:07:25.661400    4588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:07:25.720292    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:07:25.757987    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:07:25.796201    4588 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:07:25.863195    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:07:25.889245    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:07:25.926689    4588 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 12:07:25.939863    4588 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:07:25.945195    4588 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 12:07:25.958144    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:07:25.974980    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:07:26.023598    4588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:07:26.238985    4588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:07:26.451509    4588 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:07:26.451626    4588 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 12:07:26.501126    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:26.701662    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:07:29.249741    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5480592s)
	I0610 12:07:29.262915    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 12:07:29.301406    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:07:29.341268    4588 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 12:07:29.568906    4588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 12:07:29.785481    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:29.992495    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 12:07:30.037215    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:07:30.085524    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:30.300979    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 12:07:30.418219    4588 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 12:07:30.432434    4588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 12:07:30.441630    4588 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 12:07:30.441768    4588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 12:07:30.441768    4588 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0610 12:07:30.441768    4588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 12:07:30.441768    4588 command_runner.go:130] > Access: 2024-06-10 12:07:30.340771420 +0000
	I0610 12:07:30.441768    4588 command_runner.go:130] > Modify: 2024-06-10 12:07:30.340771420 +0000
	I0610 12:07:30.441768    4588 command_runner.go:130] > Change: 2024-06-10 12:07:30.344771436 +0000
	I0610 12:07:30.441768    4588 command_runner.go:130] >  Birth: -
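"Will wait 60s for socket path" is implemented by polling stat until /var/run/cri-dockerd.sock exists and is a socket, as the stat output above confirms (Access mode srw-rw----). A minimal polling sketch with a deadline:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a unix socket,
    // or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("socket ready")
    }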
	I0610 12:07:30.441768    4588 start.go:562] Will wait 60s for crictl version
	I0610 12:07:30.453463    4588 ssh_runner.go:195] Run: which crictl
	I0610 12:07:30.460096    4588 command_runner.go:130] > /usr/bin/crictl
	I0610 12:07:30.473201    4588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:07:30.530265    4588 command_runner.go:130] > Version:  0.1.0
	I0610 12:07:30.530298    4588 command_runner.go:130] > RuntimeName:  docker
	I0610 12:07:30.530298    4588 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 12:07:30.530298    4588 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 12:07:30.530453    4588 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 12:07:30.541045    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:07:30.577679    4588 command_runner.go:130] > 26.1.4
	I0610 12:07:30.586938    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:07:30.617216    4588 command_runner.go:130] > 26.1.4
	I0610 12:07:30.622417    4588 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 12:07:30.622417    4588 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 12:07:30.629450    4588 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 12:07:30.629450    4588 ip.go:210] interface addr: 172.17.144.1/20
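The ip.go lookup above walks the host's interfaces for the one whose name starts with "vEthernet (Default Switch)", skipping non-matching adapters, and takes its IPv4 address (172.17.144.1/20) as the host-side endpoint. A sketch of that lookup with net.Interfaces:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func main() {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		panic(err)
    	}
    	for _, iface := range ifaces {
    		if !strings.HasPrefix(iface.Name, "vEthernet (Default Switch)") {
    			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1"
    		}
    		addrs, err := iface.Addrs()
    		if err != nil {
    			panic(err)
    		}
    		for _, a := range addrs {
    			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
    				fmt.Println("host-side address:", ipnet) // e.g. 172.17.144.1/20
    			}
    		}
    	}
    }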
	I0610 12:07:30.643235    4588 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 12:07:30.649840    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
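The /etc/hosts edit above is deliberately idempotent: filter out any existing line tagged host.minikube.internal, append the fresh mapping, and copy the temp file back into place. The same filter-then-append in Go, writing a hypothetical local copy rather than the real /etc/hosts:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so exactly one line
    // maps hostname, whatever was there before. (Blank lines are also
    // dropped, which the real grep -v pipeline does not do.)
    func upsertHost(path, ip, hostname string) error {
    	data, _ := os.ReadFile(path) // a missing file just means "start empty"
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line == "" || strings.HasSuffix(line, "\t"+hostname) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHost("/tmp/hosts", "172.17.144.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    	fmt.Println("updated /tmp/hosts")
    }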
	I0610 12:07:30.670389    4588 kubeadm.go:877] updating cluster {Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 12:07:30.670389    4588 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:07:30.679574    4588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 12:07:30.702356    4588 docker.go:685] Got preloaded images: 
	I0610 12:07:30.702356    4588 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0610 12:07:30.713877    4588 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 12:07:30.734201    4588 command_runner.go:139] > {"Repositories":{}}
	I0610 12:07:30.745928    4588 ssh_runner.go:195] Run: which lz4
	I0610 12:07:30.752458    4588 command_runner.go:130] > /usr/bin/lz4
	I0610 12:07:30.752458    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0610 12:07:30.763475    4588 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 12:07:30.769540    4588 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 12:07:30.770227    4588 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 12:07:30.770389    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
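Before copying the ~360 MB preload tarball, ssh_runner stats the target; the `Process exited with status 1` above is the expected "not there yet" signal, after which the scp proceeds. (The `%!s(MISSING)` noise in the logged command is Go's fmt package re-rendering the literal stat verbs `%s %y` without arguments, not part of the command actually run.) A sketch of mapping a non-zero stat exit to "absent":

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // exists runs stat the way the log does and treats exit status 1
    // ("No such file or directory") as a clean "absent" answer.
    func exists(path string) (bool, error) {
    	err := exec.Command("stat", "-c", "%s %y", path).Run()
    	if err == nil {
    		return true, nil
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return false, nil
    	}
    	return false, err
    }

    func main() {
    	ok, err := exists("/preloaded.tar.lz4")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("already on disk:", ok)
    }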
	I0610 12:07:32.729738    4588 docker.go:649] duration metric: took 1.9762697s to copy over tarball
	I0610 12:07:32.743906    4588 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 12:07:41.714684    4588 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9705398s)
	I0610 12:07:41.714777    4588 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 12:07:41.787089    4588 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 12:07:41.807203    4588 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0610 12:07:41.807257    4588 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
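After unpacking the preload, minikube rewrites /var/lib/docker/image/overlay2/repositories.json so the docker daemon maps the preloaded tags and digests onto the layer IDs that just landed on disk; the restart at 12:07:42 makes docker re-read it. A tiny sketch of producing such a mapping, reusing the pause entry from the log:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Repositories mirrors the shape of docker's repositories.json:
    // repository name -> (tag or digest reference -> image ID).
    type Repositories struct {
    	Repositories map[string]map[string]string `json:"Repositories"`
    }

    func main() {
    	r := Repositories{Repositories: map[string]map[string]string{
    		"registry.k8s.io/pause": {
    			"registry.k8s.io/pause:3.9": "sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
    		},
    	}}
    	out, err := json.Marshal(r)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }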
	I0610 12:07:41.859157    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:42.090821    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:07:44.907266    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8158182s)
	I0610 12:07:44.919479    4588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 12:07:44.944175    4588 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:07:44.946511    4588 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 12:07:44.946557    4588 cache_images.go:84] Images are preloaded, skipping loading
	I0610 12:07:44.946658    4588 kubeadm.go:928] updating node { 172.17.159.171 8443 v1.30.1 docker true true} ...
	I0610 12:07:44.946933    4588 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.159.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
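The kubelet drop-in above (kubeadm.go:940) is rendered from the node config: ExecStart is cleared and re-issued with --hostname-override and --node-ip filled in per node. A text/template sketch of that rendering; the struct field names here are illustrative, not minikube's actual types:

    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	// Values taken from the log lines above.
    	t.Execute(os.Stdout, struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.30.1", "multinode-813300", "172.17.159.171"})
    }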
	I0610 12:07:44.956339    4588 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 12:07:44.991381    4588 command_runner.go:130] > cgroupfs
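With docker as the runtime, the cgroup driver kubelet must match is read straight from `docker info --format {{.CgroupDriver}}` (cgroupfs here) and threaded into both the KubeletConfiguration and kubeadm's CgroupDriver option below. The probe in Go, assuming a local docker CLI on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	driver := strings.TrimSpace(string(out)) // "cgroupfs" or "systemd"
    	fmt.Println("cgroup driver:", driver)
    }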
	I0610 12:07:44.992435    4588 cni.go:84] Creating CNI manager for ""
	I0610 12:07:44.992435    4588 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 12:07:44.992435    4588 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 12:07:44.992562    4588 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.159.171 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-813300 NodeName:multinode-813300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.159.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.159.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 12:07:44.992992    4588 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.159.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-813300"
	  kubeletExtraArgs:
	    node-ip: 172.17.159.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.159.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 12:07:45.005272    4588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 12:07:45.024093    4588 command_runner.go:130] > kubeadm
	I0610 12:07:45.024093    4588 command_runner.go:130] > kubectl
	I0610 12:07:45.024093    4588 command_runner.go:130] > kubelet
	I0610 12:07:45.024093    4588 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 12:07:45.037363    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 12:07:45.055298    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0610 12:07:45.086932    4588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 12:07:45.118552    4588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0610 12:07:45.162013    4588 ssh_runner.go:195] Run: grep 172.17.159.171	control-plane.minikube.internal$ /etc/hosts
	I0610 12:07:45.168121    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:07:45.202562    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:45.425101    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:07:45.455626    4588 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300 for IP: 172.17.159.171
	I0610 12:07:45.455626    4588 certs.go:194] generating shared ca certs ...
	I0610 12:07:45.455747    4588 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.456562    4588 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 12:07:45.456877    4588 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 12:07:45.457049    4588 certs.go:256] generating profile certs ...
	I0610 12:07:45.457786    4588 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key
	I0610 12:07:45.457868    4588 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.crt with IP's: []
	I0610 12:07:45.708342    4588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.crt ...
	I0610 12:07:45.708342    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.crt: {Name:mk54c1a1cec89ed140bb491b38817a3186ba7310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.709853    4588 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key ...
	I0610 12:07:45.709853    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key: {Name:mkf00743da8bbcad3b010f0cbb5cd0436ce14710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.710226    4588 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887
	I0610 12:07:45.710226    4588 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.159.171]
	I0610 12:07:45.907956    4588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887 ...
	I0610 12:07:45.907956    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887: {Name:mka8c1bb2a2baa00cc0af3681bd930d57ff75330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.909711    4588 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887 ...
	I0610 12:07:45.909711    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887: {Name:mkb18584b7bb3bb732e73307ae39bca648c3c22a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.910791    4588 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt
	I0610 12:07:45.926670    4588 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key
	I0610 12:07:45.927884    4588 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key
	I0610 12:07:45.928002    4588 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt with IP's: []
	I0610 12:07:46.173843    4588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt ...
	I0610 12:07:46.173843    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt: {Name:mkb418cf9d8991e80905755cce3c6f6de1ae9ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:46.174831    4588 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key ...
	I0610 12:07:46.174831    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key: {Name:mk51867a74a39076c910c5b47bfa2ded184ede24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:46.175803    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 12:07:46.175803    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 12:07:46.186849    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 12:07:46.187823    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 12:07:46.187823    4588 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 12:07:46.187823    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 12:07:46.187823    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 12:07:46.188815    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 12:07:46.188815    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 12:07:46.188815    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 12:07:46.188815    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 12:07:46.188815    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.189810    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.192830    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 12:07:46.241117    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 12:07:46.288030    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 12:07:46.335188    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 12:07:46.376270    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 12:07:46.423248    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 12:07:46.475484    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 12:07:46.527362    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 12:07:46.576727    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 12:07:46.624358    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 12:07:46.675098    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 12:07:46.722137    4588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 12:07:46.780283    4588 ssh_runner.go:195] Run: openssl version
	I0610 12:07:46.789810    4588 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 12:07:46.800778    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 12:07:46.837222    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.844961    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.845084    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.859483    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.867918    4588 command_runner.go:130] > b5213941
	I0610 12:07:46.882717    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 12:07:46.919428    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 12:07:46.952808    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.958882    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.958882    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.971190    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.980429    4588 command_runner.go:130] > 51391683
	I0610 12:07:46.998007    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 12:07:47.035525    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 12:07:47.070284    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.077578    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.078136    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.091592    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.100124    4588 command_runner.go:130] > 3ec20f2e
	I0610 12:07:47.115904    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
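	(The three openssl/ln sequences above follow OpenSSL's subject-hash convention: each CA under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs so the library can find it without rebuilding a bundle. A minimal sketch of the same wiring for one cert, using the paths from the log above:
	
	    # Compute the subject-name hash OpenSSL uses for trust-store lookups,
	    # then expose the cert under /etc/ssl/certs/<hash>.0.
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	)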
	I0610 12:07:47.147726    4588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:07:47.154748    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:07:47.154748    4588 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:07:47.156073    4588 kubeadm.go:391] StartCluster: {Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:07:47.164675    4588 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 12:07:47.200694    4588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 12:07:47.220824    4588 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0610 12:07:47.220824    4588 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0610 12:07:47.220824    4588 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0610 12:07:47.236087    4588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 12:07:47.265597    4588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:07:47.286023    4588 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:07:47.286107    4588 kubeadm.go:156] found existing configuration files:
	
	I0610 12:07:47.298886    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 12:07:47.316688    4588 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:07:47.317271    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:07:47.332217    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 12:07:47.363611    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 12:07:47.381321    4588 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:07:47.381903    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:07:47.393546    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 12:07:47.423937    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 12:07:47.440026    4588 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:07:47.440026    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:07:47.459787    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 12:07:47.496088    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 12:07:47.517579    4588 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:07:47.517579    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:07:47.528796    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 12:07:47.546992    4588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 12:07:47.980483    4588 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:07:47.980577    4588 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:08:01.301108    4588 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 12:08:01.301202    4588 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0610 12:08:01.301289    4588 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 12:08:01.301289    4588 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 12:08:01.301289    4588 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:08:01.301289    4588 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:08:01.301289    4588 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:08:01.301289    4588 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:08:01.302226    4588 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 12:08:01.302295    4588 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 12:08:01.302295    4588 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:08:01.302295    4588 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:08:01.305130    4588 out.go:204]   - Generating certificates and keys ...
	I0610 12:08:01.305388    4588 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 12:08:01.305388    4588 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 12:08:01.305588    4588 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 12:08:01.305588    4588 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 12:08:01.305751    4588 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 12:08:01.305751    4588 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 12:08:01.306003    4588 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0610 12:08:01.306003    4588 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 12:08:01.306299    4588 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 12:08:01.306299    4588 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0610 12:08:01.307259    4588 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.307345    4588 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.307672    4588 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 12:08:01.307672    4588 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 12:08:01.307672    4588 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:08:01.307672    4588 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:08:01.308946    4588 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:08:01.308946    4588 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:08:01.308946    4588 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:08:01.308946    4588 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:08:01.308946    4588 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:08:01.309472    4588 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:08:01.312844    4588 out.go:204]   - Booting up control plane ...
	I0610 12:08:01.312844    4588 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:08:01.313599    4588 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:08:01.313744    4588 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:08:01.313744    4588 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:08:01.313744    4588 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:08:01.313744    4588 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:08:01.314297    4588 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:08:01.314351    4588 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:08:01.314536    4588 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:08:01.314536    4588 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:08:01.314536    4588 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 12:08:01.314536    4588 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 12:08:01.315111    4588 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:08:01.315111    4588 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:08:01.315111    4588 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:08:01.315111    4588 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:08:01.315111    4588 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002683261s
	I0610 12:08:01.315111    4588 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002683261s
	I0610 12:08:01.315111    4588 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:08:01.315111    4588 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:08:01.315955    4588 command_runner.go:130] > [api-check] The API server is healthy after 7.00192s
	I0610 12:08:01.316020    4588 kubeadm.go:309] [api-check] The API server is healthy after 7.00192s
	I0610 12:08:01.316205    4588 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:08:01.316285    4588 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:08:01.316552    4588 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:08:01.316552    4588 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:08:01.316784    4588 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:08:01.316861    4588 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:08:01.317080    4588 kubeadm.go:309] [mark-control-plane] Marking the node multinode-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:08:01.317295    4588 command_runner.go:130] > [mark-control-plane] Marking the node multinode-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:08:01.317406    4588 kubeadm.go:309] [bootstrap-token] Using token: d6w50f.8d5fdo5xwqangh2s
	I0610 12:08:01.317406    4588 command_runner.go:130] > [bootstrap-token] Using token: d6w50f.8d5fdo5xwqangh2s
	I0610 12:08:01.321841    4588 out.go:204]   - Configuring RBAC rules ...
	I0610 12:08:01.322484    4588 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:08:01.322549    4588 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:08:01.322728    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:08:01.322728    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:08:01.323029    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:08:01.323029    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:08:01.323184    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:08:01.323184    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:08:01.323458    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:08:01.323458    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:08:01.323458    4588 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:08:01.323458    4588 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:08:01.323458    4588 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:08:01.323458    4588 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:08:01.323458    4588 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 12:08:01.323458    4588 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 12:08:01.323458    4588 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 12:08:01.323458    4588 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 12:08:01.323458    4588 kubeadm.go:309] 
	I0610 12:08:01.323458    4588 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0610 12:08:01.323458    4588 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 12:08:01.324750    4588 kubeadm.go:309] 
	I0610 12:08:01.324822    4588 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0610 12:08:01.324822    4588 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 12:08:01.324822    4588 kubeadm.go:309] 
	I0610 12:08:01.324822    4588 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0610 12:08:01.324822    4588 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 12:08:01.324822    4588 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:08:01.324822    4588 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:08:01.325344    4588 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:08:01.325383    4588 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:08:01.325383    4588 kubeadm.go:309] 
	I0610 12:08:01.325530    4588 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 12:08:01.325530    4588 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0610 12:08:01.325530    4588 kubeadm.go:309] 
	I0610 12:08:01.325530    4588 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:08:01.325530    4588 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:08:01.325530    4588 kubeadm.go:309] 
	I0610 12:08:01.325530    4588 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 12:08:01.325530    4588 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0610 12:08:01.325530    4588 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:08:01.326068    4588 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:08:01.326160    4588 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:08:01.326160    4588 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:08:01.326160    4588 kubeadm.go:309] 
	I0610 12:08:01.326435    4588 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:08:01.326435    4588 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:08:01.326712    4588 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 12:08:01.326712    4588 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0610 12:08:01.326712    4588 kubeadm.go:309] 
	I0610 12:08:01.327011    4588 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.327011    4588 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.327428    4588 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 \
	I0610 12:08:01.327428    4588 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 \
	I0610 12:08:01.327428    4588 kubeadm.go:309] 	--control-plane 
	I0610 12:08:01.327574    4588 command_runner.go:130] > 	--control-plane 
	I0610 12:08:01.327574    4588 kubeadm.go:309] 
	I0610 12:08:01.327749    4588 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:08:01.327749    4588 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:08:01.327749    4588 kubeadm.go:309] 
	I0610 12:08:01.327914    4588 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.327914    4588 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.328143    4588 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
	I0610 12:08:01.328143    4588 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
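	(For reference, the --discovery-token-ca-cert-hash value kubeadm prints above is the SHA-256 of the cluster CA's public key, its DER-encoded SubjectPublicKeyInfo. It can be recomputed on the node with the pipeline from the kubeadm documentation; a sketch using the cert path this log uses, assuming an RSA CA key as minikube generates by default:
	
	    # Emits the hex digest; prepend "sha256:" when passing it to kubeadm join.
	    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	)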
	I0610 12:08:01.328143    4588 cni.go:84] Creating CNI manager for ""
	I0610 12:08:01.328143    4588 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 12:08:01.330463    4588 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 12:08:01.347784    4588 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 12:08:01.356731    4588 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 12:08:01.356776    4588 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0610 12:08:01.356776    4588 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0610 12:08:01.356776    4588 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 12:08:01.356776    4588 command_runner.go:130] > Access: 2024-06-10 12:05:58.512184000 +0000
	I0610 12:08:01.356776    4588 command_runner.go:130] > Modify: 2024-06-06 15:35:25.000000000 +0000
	I0610 12:08:01.356867    4588 command_runner.go:130] > Change: 2024-06-10 12:05:49.137000000 +0000
	I0610 12:08:01.356867    4588 command_runner.go:130] >  Birth: -
	I0610 12:08:01.356957    4588 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 12:08:01.357012    4588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 12:08:01.407001    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 12:08:01.826713    4588 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0610 12:08:01.826713    4588 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0610 12:08:01.826713    4588 command_runner.go:130] > serviceaccount/kindnet created
	I0610 12:08:01.826713    4588 command_runner.go:130] > daemonset.apps/kindnet created
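	(At this point the kindnet CNI objects exist but the daemonset pods may still be pulling images. A quick hedged check, assuming kube-system as the namespace minikube's bundled kindnet manifest targets:
	
	    # Wait for the CNI daemonset to become ready on all nodes.
	    kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s
	)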
	I0610 12:08:01.826855    4588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 12:08:01.841874    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-813300 minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=multinode-813300 minikube.k8s.io/primary=true
	I0610 12:08:01.841874    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:01.858654    4588 command_runner.go:130] > -16
	I0610 12:08:01.858754    4588 ops.go:34] apiserver oom_adj: -16
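	(The -16 read back above uses the legacy /proc oom_adj interface, range -17 to +15; a strongly negative value tells the kernel's OOM killer to prefer other victims over kube-apiserver. minikube only verifies the value, equivalent to:
	
	    # Inspect the OOM adjustment of the running apiserver (expect a negative value).
	    cat /proc/$(pgrep kube-apiserver)/oom_adj
	)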
	I0610 12:08:02.040074    4588 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0610 12:08:02.040074    4588 command_runner.go:130] > node/multinode-813300 labeled
	I0610 12:08:02.055746    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:02.215756    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:02.564403    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:02.693633    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:03.066156    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:03.182182    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:03.552354    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:03.668708    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:04.061778    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:04.182269    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:04.561683    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:04.679824    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:05.065077    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:05.178135    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:05.563037    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:05.683240    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:06.069595    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:06.198551    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:06.567615    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:06.687919    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:07.059024    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:07.199437    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:07.559042    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:07.674044    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:08.065565    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:08.190015    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:08.564648    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:08.688052    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:09.069032    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:09.202107    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:09.560025    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:09.676786    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:10.062974    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:10.186607    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:10.564610    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:10.698529    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:11.060307    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:11.191152    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:11.563418    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:11.690517    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:12.054085    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:12.189950    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:12.562729    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:12.677893    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:13.067953    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:13.195579    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:13.558883    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:13.682493    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:14.061302    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:14.183257    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:14.567678    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:14.763665    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:15.056289    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:15.186893    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:15.564117    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:15.696782    4588 command_runner.go:130] > NAME      SECRETS   AGE
	I0610 12:08:15.696824    4588 command_runner.go:130] > default   0         0s
	I0610 12:08:15.696888    4588 kubeadm.go:1107] duration metric: took 13.8699211s to wait for elevateKubeSystemPrivileges
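	(The burst of "kubectl get sa default" retries above is a readiness poll: the ServiceAccount controller creates the "default" account asynchronously after the API server reports healthy, so minikube loops until it appears. The same wait as a shell sketch:
	
	    # Retry roughly twice per second until the ServiceAccount controller
	    # has created "default" in the default namespace.
	    until kubectl get sa default >/dev/null 2>&1; do
	      sleep 0.5
	    done
	)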
	W0610 12:08:15.696888    4588 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 12:08:15.696888    4588 kubeadm.go:393] duration metric: took 28.5406976s to StartCluster
	I0610 12:08:15.696888    4588 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:08:15.696888    4588 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:15.699411    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:08:15.700711    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 12:08:15.700711    4588 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 12:08:15.704964    4588 out.go:177] * Verifying Kubernetes components...
	I0610 12:08:15.700711    4588 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 12:08:15.701382    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:08:15.707565    4588 addons.go:69] Setting storage-provisioner=true in profile "multinode-813300"
	I0610 12:08:15.707565    4588 addons.go:69] Setting default-storageclass=true in profile "multinode-813300"
	I0610 12:08:15.707565    4588 addons.go:234] Setting addon storage-provisioner=true in "multinode-813300"
	I0610 12:08:15.707565    4588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-813300"
	I0610 12:08:15.707565    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:08:15.708184    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:15.709164    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:15.721781    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:08:16.014416    4588 command_runner.go:130] > apiVersion: v1
	I0610 12:08:16.014416    4588 command_runner.go:130] > data:
	I0610 12:08:16.014416    4588 command_runner.go:130] >   Corefile: |
	I0610 12:08:16.014416    4588 command_runner.go:130] >     .:53 {
	I0610 12:08:16.014416    4588 command_runner.go:130] >         errors
	I0610 12:08:16.014416    4588 command_runner.go:130] >         health {
	I0610 12:08:16.014416    4588 command_runner.go:130] >            lameduck 5s
	I0610 12:08:16.014416    4588 command_runner.go:130] >         }
	I0610 12:08:16.014416    4588 command_runner.go:130] >         ready
	I0610 12:08:16.014416    4588 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0610 12:08:16.014416    4588 command_runner.go:130] >            pods insecure
	I0610 12:08:16.014416    4588 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0610 12:08:16.014416    4588 command_runner.go:130] >            ttl 30
	I0610 12:08:16.014416    4588 command_runner.go:130] >         }
	I0610 12:08:16.014416    4588 command_runner.go:130] >         prometheus :9153
	I0610 12:08:16.014416    4588 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0610 12:08:16.014416    4588 command_runner.go:130] >            max_concurrent 1000
	I0610 12:08:16.014416    4588 command_runner.go:130] >         }
	I0610 12:08:16.014416    4588 command_runner.go:130] >         cache 30
	I0610 12:08:16.014416    4588 command_runner.go:130] >         loop
	I0610 12:08:16.014416    4588 command_runner.go:130] >         reload
	I0610 12:08:16.014416    4588 command_runner.go:130] >         loadbalance
	I0610 12:08:16.014416    4588 command_runner.go:130] >     }
	I0610 12:08:16.014416    4588 command_runner.go:130] > kind: ConfigMap
	I0610 12:08:16.014416    4588 command_runner.go:130] > metadata:
	I0610 12:08:16.014416    4588 command_runner.go:130] >   creationTimestamp: "2024-06-10T12:08:00Z"
	I0610 12:08:16.014416    4588 command_runner.go:130] >   name: coredns
	I0610 12:08:16.014416    4588 command_runner.go:130] >   namespace: kube-system
	I0610 12:08:16.014416    4588 command_runner.go:130] >   resourceVersion: "223"
	I0610 12:08:16.014416    4588 command_runner.go:130] >   uid: 6b6b1b18-8340-404c-ad83-066f280bc1b8
	I0610 12:08:16.014416    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 12:08:16.117425    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:08:16.455420    4588 command_runner.go:130] > configmap/coredns replaced
	I0610 12:08:16.455504    4588 start.go:946] {"host.minikube.internal": 172.17.144.1} host record injected into CoreDNS's ConfigMap
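	(The sed pipeline above rewrites the CoreDNS Corefile shown earlier: it inserts a "log" directive ahead of "errors" and, ahead of the "forward" block, a hosts stanza so pods can resolve the hypervisor host by name. The injected stanza, reconstructed from the sed expression itself:
	
	        hosts {
	           172.17.144.1 host.minikube.internal
	           fallthrough
	        }
	)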
	I0610 12:08:16.457151    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:16.457151    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:16.457851    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:08:16.457851    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:08:16.459915    4588 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 12:08:16.460479    4588 node_ready.go:35] waiting up to 6m0s for node "multinode-813300" to be "Ready" ...
	I0610 12:08:16.460479    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:16.460479    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.460479    4588 round_trippers.go:463] GET https://172.17.159.171:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 12:08:16.460479    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.460479    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.460479    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.460479    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.460479    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.477494    4588 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0610 12:08:16.477494    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Audit-Id: 5d9cb475-9eb4-490b-84cb-48947c853346
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.477690    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.477690    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Content-Length: 291
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.477690    4588 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"362","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.477690    4588 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0610 12:08:16.477690    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.478258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.478258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.478258    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.478258    4588 round_trippers.go:580]     Audit-Id: a0a248f5-f010-49bd-be88-f9ce21911653
	I0610 12:08:16.478258    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.478536    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:16.478622    4588 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"362","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.478747    4588 round_trippers.go:463] PUT https://172.17.159.171:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 12:08:16.478747    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.478747    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.478747    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.478747    4588 round_trippers.go:473]     Content-Type: application/json
	I0610 12:08:16.494772    4588 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0610 12:08:16.495065    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.495065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Content-Length: 291
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Audit-Id: d535bcf1-d6e3-4914-8855-21dc33661312
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.495065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.495137    4588 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"364","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.969579    4588 round_trippers.go:463] GET https://172.17.159.171:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 12:08:16.969579    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.969579    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.969579    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:16.969579    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.969579    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.969579    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.969579    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.973208    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:16.973208    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.973208    4588 round_trippers.go:580]     Audit-Id: 72e9d5e3-bcfa-467a-b56b-e353a5261918
	I0610 12:08:16.973208    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.973208    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.973208    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.973665    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.973665    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.973920    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:16.973920    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.974025    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.974025    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.974025    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.974124    4588 round_trippers.go:580]     Content-Length: 291
	I0610 12:08:16.974025    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:16.974124    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.974257    4588 round_trippers.go:580]     Audit-Id: 606c7d1b-8607-486b-901e-1a37f0e7b82a
	I0610 12:08:16.974334    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.974445    4588 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"374","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.974850    4588 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-813300" context rescaled to 1 replicas
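
The GET and PUT traced above are the scale-subresource round trip behind that kapi.go message: read the autoscaling/v1 Scale of the coredns deployment, set spec.replicas from 2 to 1, and write it back. Below is a minimal client-go sketch of the same exchange, assuming a kubeconfig at the default home location; it is illustrative only, not minikube's actual code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default home location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	deployments := kubernetes.NewForConfigOrDie(cfg).AppsV1().Deployments("kube-system")

	// GET .../deployments/coredns/scale (the first 200 OK in the trace).
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// PUT the same Scale back with spec.replicas lowered (the second 200 OK).
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}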
	I0610 12:08:17.461815    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:17.461815    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:17.461815    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:17.461815    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:17.466181    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:17.466181    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:17.466181    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:17.466624    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:17.466624    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:17.466624    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:17 GMT
	I0610 12:08:17.466624    4588 round_trippers.go:580]     Audit-Id: f25e967e-f2a6-43d3-b020-a71c67099236
	I0610 12:08:17.466624    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:17.466865    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:17.969784    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:17.969784    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:17.969784    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:17.969784    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:17.973880    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:17.974417    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:17.974417    4588 round_trippers.go:580]     Audit-Id: b880d804-4a72-46ac-a1eb-64811f820ef2
	I0610 12:08:17.974417    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:17.974417    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:17.974505    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:17.974505    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:17.974505    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:17 GMT
	I0610 12:08:17.974850    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:18.148749    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:18.148749    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:18.151774    4588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:08:18.148749    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:18.155349    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:18.155349    4588 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:08:18.155349    4588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 12:08:18.155349    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:18.155769    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:18.156778    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:08:18.157762    4588 addons.go:234] Setting addon default-storageclass=true in "multinode-813300"
	I0610 12:08:18.157762    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:08:18.158791    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:18.463954    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:18.464224    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:18.464224    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:18.464224    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:18.468817    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:18.468866    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:18.468866    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:18.468866    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:18 GMT
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Audit-Id: 08ba8b87-2ebe-4b1a-9bc7-7fc5017e34d1
	I0610 12:08:18.469449    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:18.469798    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
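
The repeating GET /api/v1/nodes/multinode-813300 above is node_ready.go's poll loop: fetch the node roughly every half second, for up to the 6m0s budget declared at 12:08:16.460479, until the Ready condition turns True. Below is a helper sketch of that loop using client-go and apimachinery's wait package; the ~500ms interval matches the trace, but the function name and error handling are assumptions, not minikube's exact logic.

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady polls the node until its Ready condition is True or the
// timeout expires, mirroring the GET-every-~500ms pattern in the trace.
func WaitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status Ready:%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}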
	I0610 12:08:18.972076    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:18.972076    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:18.972076    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:18.972076    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:18.975651    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:18.975651    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:18.976021    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:18.976021    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:18 GMT
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Audit-Id: 9c65fa4d-0b55-4681-a48a-3b1a4dbb54ce
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:18.976441    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:19.462801    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:19.462801    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:19.462801    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:19.462801    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:19.466510    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:19.466510    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Audit-Id: 71bb3ada-5b1d-4303-8b49-627cb8297316
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:19.467506    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:19.467506    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:19 GMT
	I0610 12:08:19.467506    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:19.971420    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:19.971420    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:19.971517    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:19.971517    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:19.974973    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:19.974973    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:19.974973    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:19.974973    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:19.975460    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:19.975460    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:19 GMT
	I0610 12:08:19.975460    4588 round_trippers.go:580]     Audit-Id: 8cd747ea-2235-458e-8465-b8e6dd798dc6
	I0610 12:08:19.975460    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:19.975966    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:20.464847    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:20.465278    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:20.465387    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:20.465387    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:20.469653    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:20.469653    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:20.469653    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:20.469653    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:20 GMT
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Audit-Id: 77e2b9d7-6f2e-498f-b2b6-39850d5cf023
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:20.470875    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:20.471154    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:20.673653    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:20.673741    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:20.673874    4588 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 12:08:20.673874    4588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 12:08:20.673943    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:20.675325    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:20.675325    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:20.675325    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
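
Interleaved with the API polling, libmachine keeps shelling out to PowerShell to ask Hyper-V for the VM's state and, once it reports Running, the first IP of its first network adapter; those are the [executing ==>] / [stdout =====>] pairs in this trace. Below is a small os/exec sketch of those two probes; the probe helper is a hypothetical name, and the real driver does more validation than this.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe runs one PowerShell expression the way the log shows:
// non-interactive, no profile, stdout captured.
func probe(expr string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := probe(`( Hyper-V\Get-VM multinode-813300 ).state`)
	if err != nil {
		panic(err)
	}
	ip, err := probe(`(( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]`)
	if err != nil {
		panic(err)
	}
	fmt.Println(state, ip) // e.g. "Running 172.17.159.171"
}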
	I0610 12:08:20.971415    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:20.971628    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:20.971628    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:20.971628    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:20.977135    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:20.977726    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Audit-Id: 85b2432c-b255-446d-91a8-0de43d9b76ca
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:20.977726    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:20.977726    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:20 GMT
	I0610 12:08:20.978131    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:21.462028    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:21.462138    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:21.462213    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:21.462213    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:21.465088    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:21.465888    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:21.465888    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:21.465888    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:21.465888    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:21.466013    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:21.466013    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:21 GMT
	I0610 12:08:21.466013    4588 round_trippers.go:580]     Audit-Id: 8e7bfa2d-47b3-45cc-a081-3540ba8a26c7
	I0610 12:08:21.466463    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:21.972657    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:21.972657    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:21.972657    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:21.972657    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:21.977058    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:21.977058    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:21.977134    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:21.977134    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:21.977134    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:21.977134    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:21.977218    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:21 GMT
	I0610 12:08:21.977218    4588 round_trippers.go:580]     Audit-Id: 09c72934-2b71-461a-b4fd-0e14aaaf73b0
	I0610 12:08:21.977477    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:22.465513    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:22.465513    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:22.465581    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:22.465581    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:22.468907    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:22.468907    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:22.468907    4588 round_trippers.go:580]     Audit-Id: 046c63ca-5191-4136-ba48-0368a7e8d11c
	I0610 12:08:22.468907    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:22.468907    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:22.469891    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:22.469891    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:22.469891    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:22 GMT
	I0610 12:08:22.469891    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:22.972701    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:22.972701    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:22.972701    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:22.972701    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:22.976321    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:22.976321    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:22.976321    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:22.976321    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:22 GMT
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Audit-Id: cbf84943-c01b-45e1-b8d0-c6fbf9f578a4
	I0610 12:08:22.977441    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:22.977790    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:23.167919    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:23.168510    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:23.168510    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:08:23.467192    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:23.467263    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:23.467263    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:23.467263    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:23.470722    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:23.471197    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:23.471197    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:23 GMT
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Audit-Id: 15d64748-9238-483a-8170-ffc83f1d908d
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:23.471197    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:23.471538    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:23.612259    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:08:23.612340    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:23.612790    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:08:23.770726    4588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:08:23.973469    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:24.067126    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:24.067126    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:24.067126    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:24.071456    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:24.071456    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Audit-Id: 3f7761c1-775f-479a-926e-e6e225ae5297
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:24.071456    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:24.071456    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:24 GMT
	I0610 12:08:24.071917    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:24.381409    4588 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0610 12:08:24.381600    4588 command_runner.go:130] > pod/storage-provisioner created
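
The created lines above close the loop on the storage-provisioner addon: the manifest was copied into the guest at 12:08:18 (scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml), an SSH client was opened with the profile's id_rsa at 12:08:23, and the VM's bundled kubectl applied it. Below is a simplified stand-in for that ssh_runner step using golang.org/x/crypto/ssh, with the host, user, key path, and command taken from the trace; it is a sketch, not minikube's actual code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	conn, err := ssh.Dial("tcp", "172.17.159.171:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	session, err := conn.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Run the same apply the log shows at 12:08:23.770726.
	out, err := session.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Print(string(out)) // serviceaccount/storage-provisioner created, ...
	if err != nil {
		panic(err)
	}
}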
	I0610 12:08:24.466424    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:24.466616    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:24.466616    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:24.466616    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:24.469640    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:24.471213    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Audit-Id: 644ee470-8778-4b97-ade1-3d396880a3eb
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:24.471250    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:24.471250    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:24 GMT
	I0610 12:08:24.471668    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:24.975984    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:24.975984    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:24.976290    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:24.976290    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:24.979743    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:24.979743    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Audit-Id: 577d1627-ffbf-4769-b31e-54336e194420
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:24.979743    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:24.979743    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:24 GMT
	I0610 12:08:24.980589    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:24.981314    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:25.467082    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:25.467082    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:25.467082    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:25.467405    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:25.471429    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:25.471429    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:25.471429    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:25.471429    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:25 GMT
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Audit-Id: b22a7791-024c-48c8-a3d0-60f86c7bd039
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:25.471826    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:25.970625    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:25.970625    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:25.970625    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:25.970625    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:25.975518    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:25.975586    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:25.975586    4588 round_trippers.go:580]     Audit-Id: d71153d6-4e44-462d-ae60-2161aced6f71
	I0610 12:08:25.975586    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:25.975668    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:25.975668    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:25.975668    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:25.975668    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:25 GMT
	I0610 12:08:25.975668    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:26.019285    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:08:26.019893    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:26.020248    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:08:26.163944    4588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 12:08:26.337920    4588 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0610 12:08:26.338319    4588 round_trippers.go:463] GET https://172.17.159.171:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 12:08:26.338580    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.338580    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.338704    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.349001    4588 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 12:08:26.350011    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.350011    4588 round_trippers.go:580]     Audit-Id: 6617c405-50a5-4bfc-aadb-527dd013680d
	I0610 12:08:26.350011    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.350011    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.350063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.350063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.350063    4588 round_trippers.go:580]     Content-Length: 1273
	I0610 12:08:26.350063    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.350188    4588 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"3c2bb998-bd12-48de-88bb-ef852d4ef17b","resourceVersion":"402","creationTimestamp":"2024-06-10T12:08:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T12:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0610 12:08:26.351049    4588 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3c2bb998-bd12-48de-88bb-ef852d4ef17b","resourceVersion":"402","creationTimestamp":"2024-06-10T12:08:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T12:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0610 12:08:26.351165    4588 round_trippers.go:463] PUT https://172.17.159.171:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0610 12:08:26.351165    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.351165    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.351165    4588 round_trippers.go:473]     Content-Type: application/json
	I0610 12:08:26.351231    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.354220    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:26.354220    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Audit-Id: 3328ace5-f8a9-432f-95d6-2e022f2f96ba
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.355159    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.355159    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.355159    4588 round_trippers.go:580]     Content-Length: 1220
	I0610 12:08:26.355159    4588 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3c2bb998-bd12-48de-88bb-ef852d4ef17b","resourceVersion":"402","creationTimestamp":"2024-06-10T12:08:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T12:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0610 12:08:26.359449    4588 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 12:08:26.362054    4588 addons.go:510] duration metric: took 10.6612568s for enable addons: enabled=[storage-provisioner default-storageclass]
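[editor's note] The GET/PUT pair above is minikube reconciling the freshly created "standard" StorageClass so that it carries the storageclass.kubernetes.io/is-default-class annotation visible in the response bodies. The following is a minimal client-go sketch of how one could list StorageClasses and report the default; it is illustrative only, not minikube's actual addon code, and the kubeconfig path is the in-VM path taken from the log above (adjust for your environment):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path: /var/lib/minikube/kubeconfig is where minikube keeps
        // the in-VM kubeconfig (see the ssh_runner line above).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, sc := range scs.Items {
            // Same annotation the PUT above writes back onto "standard".
            if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
                fmt.Printf("default StorageClass: %s (provisioner %s)\n", sc.Name, sc.Provisioner)
            }
        }
    }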
	I0610 12:08:26.472340    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:26.472340    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.472340    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.472340    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.476989    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:26.476989    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.476989    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.476989    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.477437    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.477437    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.477437    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.477437    4588 round_trippers.go:580]     Audit-Id: 2d7eac79-25bf-4e84-bec6-871d0084a72d
	I0610 12:08:26.477671    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:26.973673    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:26.973888    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.973888    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.973888    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.977273    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:26.977273    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.977273    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.977273    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.977273    4588 round_trippers.go:580]     Audit-Id: 4981fd01-235e-4c9f-9367-3a7de9313d0e
	I0610 12:08:26.977273    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.978045    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.978045    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.978205    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:27.462245    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:27.462245    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:27.462245    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:27.462340    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:27.467699    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:27.467699    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:27.467825    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:27.467825    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:27 GMT
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Audit-Id: c9d0a77d-a57e-4d70-84a2-e398f5ffa765
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:27.468099    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:27.469115    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:27.960920    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:27.960920    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:27.960920    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:27.960920    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:27.965654    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:27.965654    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:27.965654    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:27.965654    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:27.965654    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:27.966150    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:27 GMT
	I0610 12:08:27.966150    4588 round_trippers.go:580]     Audit-Id: 5d8201db-b32c-4acf-8ad6-345335bd6d2d
	I0610 12:08:27.966150    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:27.966354    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:28.474445    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:28.474445    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:28.474445    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:28.474445    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:28.482343    4588 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:08:28.482431    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:28.482431    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:28.482431    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:28 GMT
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Audit-Id: 31e80831-1c73-4c80-b784-0f1dce4ba371
	I0610 12:08:28.482431    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:28.961355    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:28.961600    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:28.961600    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:28.961600    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:28.965419    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:28.965419    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Audit-Id: 0bdf6c06-0223-405f-8706-dfbe77e36c8b
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:28.965419    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:28.965419    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:28 GMT
	I0610 12:08:28.966753    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:29.464161    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:29.464216    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:29.464216    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:29.464216    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:29.468789    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:29.468789    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:29 GMT
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Audit-Id: a565b77c-b1b9-4089-8623-2c276f67440d
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:29.469063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:29.469063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:29.469412    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:29.469971    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:29.962498    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:29.962498    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:29.962498    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:29.962498    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:29.967420    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:29.967881    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:29.967881    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:29.967881    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:29 GMT
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Audit-Id: 16709f5b-fb80-40b1-a6e2-9fdc0e2c33b6
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:29.967881    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:30.466094    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:30.466389    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.466389    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.466451    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.473102    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:08:30.473102    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.473102    4588 round_trippers.go:580]     Audit-Id: b7b3666c-e49c-4427-9cde-6abd578e055f
	I0610 12:08:30.473102    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.473102    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.473376    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.473376    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.473376    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:30 GMT
	I0610 12:08:30.473554    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:30.971452    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:30.971452    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.971586    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.971586    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.974265    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:30.974265    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.975194    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.975194    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:30 GMT
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Audit-Id: ee0e8e0f-291b-4fd4-a42f-a1ec6d75fd51
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.975506    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"407","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0610 12:08:30.975734    4588 node_ready.go:49] node "multinode-813300" has status "Ready":"True"
	I0610 12:08:30.975734    4588 node_ready.go:38] duration metric: took 14.5151365s for node "multinode-813300" to be "Ready" ...
	I0610 12:08:30.975734    4588 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
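[editor's note] The repeated GET blocks above are minikube's node readiness wait: roughly every 500ms it fetches /api/v1/nodes/multinode-813300 and checks the NodeReady condition, logging "Ready":"False" until the kubelet reports Ready (here after 14.5s). A minimal client-go sketch of that poll, assuming the node name and in-VM kubeconfig path from this log; illustrative, not the actual node_ready.go implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-813300", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        // Mirrors the node_ready.go log lines above.
                        fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
                        if c.Status == corev1.ConditionTrue {
                            return
                        }
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
        }
    }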
	I0610 12:08:30.975734    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:30.975734    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.975734    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.975734    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.981306    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:30.981425    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.981425    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:30 GMT
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Audit-Id: 938fb101-b66e-4d12-9cf6-8a418d730def
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.981425    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.982695    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0610 12:08:30.987017    4588 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:30.987017    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:30.987017    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.987017    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.987017    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.991014    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:30.991014    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.991014    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.991014    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.991014    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.991014    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.991650    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:30.991650    4588 round_trippers.go:580]     Audit-Id: a20fe82f-5987-467b-a829-238d7f03bb9d
	I0610 12:08:30.992127    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:30.992583    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:30.992583    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.992583    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.992583    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.995139    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:30.995139    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.995139    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.995139    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.995139    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:30.995139    4588 round_trippers.go:580]     Audit-Id: 67280c1b-dd0e-4dd1-adff-518782aaded3
	I0610 12:08:30.995139    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.995736    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.995736    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"407","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0610 12:08:31.497373    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:31.497442    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.497442    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.497503    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:31.500007    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:31.500007    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Audit-Id: a1858d6a-493d-4307-88c5-562319ac0e90
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:31.500951    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:31.500951    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:31.504473    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:31.505489    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:31.505489    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.505489    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.505489    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:31.511925    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:08:31.512084    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Audit-Id: 75267635-50fe-4afc-8272-36f1623fe090
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:31.512084    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:31.512084    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:31.512456    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:31.989543    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:31.989543    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.989543    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.989543    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:31.993664    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:31.993817    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Audit-Id: ad39aa17-cc09-4f93-bf6b-cdc9adb39955
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:31.993817    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:31.993817    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:31.996841    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:31.997224    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:31.997758    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.997758    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.997758    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.002165    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:32.002165    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.002165    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:32.002165    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:32.002711    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:32.002711    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:32.002711    4588 round_trippers.go:580]     Audit-Id: e13fc67f-b777-4f9b-abfd-1f1127f85080
	I0610 12:08:32.002711    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.002926    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:32.495322    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:32.495503    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:32.495503    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:32.495503    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.499334    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:32.499334    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.499334    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:32.499334    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Audit-Id: 750ca129-89cc-4b31-978b-eb45c8205826
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:32.500108    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:32.500884    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:32.500884    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:32.500939    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:32.500939    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.505349    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:32.505349    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.505349    4588 round_trippers.go:580]     Audit-Id: 4bd7a5b6-e799-44ba-b894-becda2bbf011
	I0610 12:08:32.505349    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.505349    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:32.505349    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:32.505349    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:32.505887    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:32.506152    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:32.995187    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:32.995187    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:32.995187    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:32.995187    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.999219    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:32.999219    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Audit-Id: 58a497a3-7bd3-4807-989d-93a7abd2266d
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.000085    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.000085    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.000226    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0610 12:08:33.001482    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.001482    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.001482    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.001482    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.004802    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:33.004802    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.004802    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.004802    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.004802    4588 round_trippers.go:580]     Audit-Id: b39df611-6465-4b74-a9a3-b939651b43fe
	I0610 12:08:33.004802    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.005828    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.005974    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.006340    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.006877    4588 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.006877    4588 pod_ready.go:81] duration metric: took 2.0198434s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.006932    4588 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.007046    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-813300
	I0610 12:08:33.007046    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.007046    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.007094    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.009577    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.009577    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.009577    4588 round_trippers.go:580]     Audit-Id: 76096531-167d-4f83-bd03-e7713e1e8d9d
	I0610 12:08:33.009577    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.009577    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.010082    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.010082    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.010082    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.010082    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"e48af956-8533-4b8e-be5d-0834484cbffa","resourceVersion":"385","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.159.171:2379","kubernetes.io/config.hash":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.mirror":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.seen":"2024-06-10T12:08:00.781973961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0610 12:08:33.010556    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.010556    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.010556    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.010556    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.013440    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.013440    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.013440    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.013440    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Audit-Id: 6b040327-de96-49d5-8e30-1c94f19e6445
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.014281    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.014698    4588 pod_ready.go:92] pod "etcd-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.014698    4588 pod_ready.go:81] duration metric: took 7.7654ms for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.014760    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.014878    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-813300
	I0610 12:08:33.014878    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.014908    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.014908    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.019251    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:33.019385    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.019385    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.019385    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Audit-Id: a56c64cd-4b78-4ec4-b317-d23c5bd91346
	I0610 12:08:33.019916    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-813300","namespace":"kube-system","uid":"f824b391-b3d2-49ec-ba7d-863cb2150f81","resourceVersion":"386","creationTimestamp":"2024-06-10T12:07:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.159.171:8443","kubernetes.io/config.hash":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.mirror":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.seen":"2024-06-10T12:07:52.425003820Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0610 12:08:33.020589    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.020695    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.020695    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.020695    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.024226    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:33.024226    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Audit-Id: ba42cb6f-0b20-475d-81bb-08c0c2b424c1
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.024226    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.024226    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.024787    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.025075    4588 pod_ready.go:92] pod "kube-apiserver-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.025075    4588 pod_ready.go:81] duration metric: took 10.3143ms for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.025075    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.025075    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-813300
	I0610 12:08:33.025075    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.025075    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.025075    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.027688    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.027688    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Audit-Id: 627bf56d-7d78-4898-b65b-7e67c35b4b59
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.027688    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.027688    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.028800    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-813300","namespace":"kube-system","uid":"879be9d7-8b2b-4f58-ba70-61d4e9f3441e","resourceVersion":"384","creationTimestamp":"2024-06-10T12:08:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.mirror":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.seen":"2024-06-10T12:08:00.781970961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0610 12:08:33.029481    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.029481    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.029481    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.029481    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.031724    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.031724    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Audit-Id: 545f7fb9-5389-46a1-9ca7-54eea814ce0e
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.031724    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.031724    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.032537    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.033863    4588 pod_ready.go:92] pod "kube-controller-manager-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.034008    4588 pod_ready.go:81] duration metric: took 8.9332ms for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.034008    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.034008    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:08:33.034008    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.034008    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.034229    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.036496    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.036496    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.036496    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Audit-Id: 711cf59f-d3e3-4f21-a5db-187fe7f58c13
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.036496    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.036496    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrpvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1","resourceVersion":"380","creationTimestamp":"2024-06-10T12:08:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0610 12:08:33.037906    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.037952    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.038071    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.038071    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.040362    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.040362    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.040362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Audit-Id: e6d94a88-bd9f-4626-b1c3-879d50c77dd8
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.040362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.041393    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.041808    4588 pod_ready.go:92] pod "kube-proxy-nrpvt" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.041808    4588 pod_ready.go:81] duration metric: took 7.8004ms for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.041877    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.195916    4588 request.go:629] Waited for 154.0375ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:08:33.196165    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:08:33.196165    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.196165    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.196232    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.202934    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:08:33.203372    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.203372    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.203372    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.203372    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.203439    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.203439    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.203439    4588 round_trippers.go:580]     Audit-Id: 3370c09f-361f-45e5-a7c2-7da8cdbd9831
	I0610 12:08:33.203622    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-813300","namespace":"kube-system","uid":"bd85735c-2f0d-48ab-bb0e-83f471c3af0a","resourceVersion":"387","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.mirror":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.seen":"2024-06-10T12:08:00.781972261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0610 12:08:33.400282    4588 request.go:629] Waited for 195.7136ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.400649    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.400673    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.400673    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.400673    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.403562    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.403562    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Audit-Id: dec8d733-a395-4375-9e53-c5161847aeac
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.403562    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.403562    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.404668    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.405082    4588 pod_ready.go:92] pod "kube-scheduler-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.405082    4588 pod_ready.go:81] duration metric: took 363.2018ms for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.405082    4588 pod_ready.go:38] duration metric: took 2.4293279s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
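
The pod_ready waits above poll each pod's Ready condition through repeated GETs against the API server. A minimal client-go sketch of the same check (the kubeconfig path is a placeholder; the pod name, namespace, and 6m0s timeout are taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-kbhvv", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // re-poll, as the repeated GETs above show
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
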
	I0610 12:08:33.405082    4588 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:08:33.419788    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:08:33.450414    4588 command_runner.go:130] > 1957
	I0610 12:08:33.450668    4588 api_server.go:72] duration metric: took 17.7498125s to wait for apiserver process to appear ...
	I0610 12:08:33.450668    4588 api_server.go:88] waiting for apiserver healthz status ...
	I0610 12:08:33.450668    4588 api_server.go:253] Checking apiserver healthz at https://172.17.159.171:8443/healthz ...
	I0610 12:08:33.458286    4588 api_server.go:279] https://172.17.159.171:8443/healthz returned 200:
	ok
	I0610 12:08:33.458286    4588 round_trippers.go:463] GET https://172.17.159.171:8443/version
	I0610 12:08:33.458286    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.458286    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.458286    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.462485    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:33.462485    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.462485    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.462485    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Content-Length: 263
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Audit-Id: 16c16afd-0fbc-487c-ad2f-457898147096
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.463107    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.463107    4588 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 12:08:33.463254    4588 api_server.go:141] control plane version: v1.30.1
	I0610 12:08:33.463254    4588 api_server.go:131] duration metric: took 12.5864ms to wait for apiserver health ...
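
The healthz probe and the /version round-trip above map directly onto client-go's discovery client. A sketch, assuming a kubeconfig-based setup (the path is a placeholder):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz, as in api_server.go:253 above; returns "ok" when healthy.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)
        // GET /version, decoding the same JSON shown in the response body above.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion) // e.g. v1.30.1
    }
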
	I0610 12:08:33.463316    4588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:08:33.605309    4588 request.go:629] Waited for 141.9539ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:33.605546    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:33.605546    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.605546    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.605546    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.611373    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:33.612010    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.612010    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.612080    4588 round_trippers.go:580]     Audit-Id: 8601551f-3309-4d3c-a243-c54f622ba627
	I0610 12:08:33.612080    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.612080    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.612080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.612080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.613396    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0610 12:08:33.616317    4588 system_pods.go:59] 8 kube-system pods found
	I0610 12:08:33.616317    4588 system_pods.go:61] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "etcd-multinode-813300" [e48af956-8533-4b8e-be5d-0834484cbffa] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-apiserver-multinode-813300" [f824b391-b3d2-49ec-ba7d-863cb2150f81] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:08:33.616317    4588 system_pods.go:74] duration metric: took 153.0001ms to wait for pod list to return data ...
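
The eight-pod summary above comes from a single PodList request against /api/v1/namespaces/kube-system/pods. A sketch of the equivalent query (kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // Matches the log's `"name" [uid] Phase` lines above.
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
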
	I0610 12:08:33.616317    4588 default_sa.go:34] waiting for default service account to be created ...
	I0610 12:08:33.808138    4588 request.go:629] Waited for 191.1567ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/default/serviceaccounts
	I0610 12:08:33.808225    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/default/serviceaccounts
	I0610 12:08:33.808225    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.808225    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.808225    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.813003    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:33.813365    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Content-Length: 261
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Audit-Id: 53fadb3a-0bcd-4518-aaa6-0171143260ed
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.813365    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.813365    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.813459    4588 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2033967b-ff48-4641-b518-45705bf023c6","resourceVersion":"336","creationTimestamp":"2024-06-10T12:08:15Z"}}]}
	I0610 12:08:33.813646    4588 default_sa.go:45] found service account: "default"
	I0610 12:08:33.813646    4588 default_sa.go:55] duration metric: took 197.3272ms for default service account to be created ...
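
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines (request.go:629) are produced by client-go's default client-side rate limiter (QPS 5, burst 10), not by the server's API Priority and Fairness. A sketch of raising those limits on a rest.Config (the values and path are illustrative):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is 5; higher values avoid the client-side waits seen above
        cfg.Burst = 100 // default is 10
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = cs // use the clientset as usual; bursts of GETs will no longer queue client-side
    }
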
	I0610 12:08:33.813646    4588 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 12:08:34.013591    4588 request.go:629] Waited for 199.9428ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:34.013591    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:34.013591    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:34.013591    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:34.013591    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:34.019566    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:34.019566    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Audit-Id: ccddedc7-4912-4f64-a5db-e857ae601e77
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:34.019566    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:34.019566    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:34 GMT
	I0610 12:08:34.022328    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0610 12:08:34.025311    4588 system_pods.go:86] 8 kube-system pods found
	I0610 12:08:34.025311    4588 system_pods.go:89] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "etcd-multinode-813300" [e48af956-8533-4b8e-be5d-0834484cbffa] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-apiserver-multinode-813300" [f824b391-b3d2-49ec-ba7d-863cb2150f81] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:08:34.025447    4588 system_pods.go:126] duration metric: took 211.7988ms to wait for k8s-apps to be running ...
	I0610 12:08:34.025531    4588 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:08:34.036640    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:08:34.068920    4588 system_svc.go:56] duration metric: took 43.0864ms (WaitForService) to wait for kubelet
	I0610 12:08:34.068920    4588 kubeadm.go:576] duration metric: took 18.3680596s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:08:34.068920    4588 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:08:34.200619    4588 request.go:629] Waited for 131.5276ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes
	I0610 12:08:34.200701    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes
	I0610 12:08:34.200763    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:34.200763    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:34.200763    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:34.204676    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:34.204676    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:34.204676    4588 round_trippers.go:580]     Audit-Id: f224ea65-0cb9-4a1e-8a42-23d61494a02a
	I0610 12:08:34.204676    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:34.205255    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:34.205255    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:34.205255    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:34.205255    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:34 GMT
	I0610 12:08:34.205556    4588 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5013 chars]
	I0610 12:08:34.206165    4588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:08:34.206219    4588 node_conditions.go:123] node cpu capacity is 2
	I0610 12:08:34.206219    4588 node_conditions.go:105] duration metric: took 137.298ms to run NodePressure ...
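
The ephemeral-storage and cpu figures above are read from each node's status.capacity, and the NodePressure verification inspects the node conditions. A sketch of the same lookups (kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, eph.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
                    fmt.Printf("  %s=%s\n", c.Type, c.Status) // should be False on a healthy node
                }
            }
        }
    }
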
	I0610 12:08:34.206273    4588 start.go:240] waiting for startup goroutines ...
	I0610 12:08:34.206302    4588 start.go:245] waiting for cluster config update ...
	I0610 12:08:34.206396    4588 start.go:254] writing updated cluster config ...
	I0610 12:08:34.210462    4588 out.go:177] 
	I0610 12:08:34.211951    4588 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:08:34.219370    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:08:34.219370    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:08:34.230682    4588 out.go:177] * Starting "multinode-813300-m02" worker node in "multinode-813300" cluster
	I0610 12:08:34.232875    4588 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:08:34.232875    4588 cache.go:56] Caching tarball of preloaded images
	I0610 12:08:34.232875    4588 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:08:34.232875    4588 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 12:08:34.233735    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:08:34.236944    4588 start.go:360] acquireMachinesLock for multinode-813300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:08:34.236944    4588 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300-m02"
	I0610 12:08:34.237615    4588 start.go:93] Provisioning new machine with config: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:08:34.237615    4588 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0610 12:08:34.239702    4588 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 12:08:34.239702    4588 start.go:159] libmachine.API.Create for "multinode-813300" (driver="hyperv")
	I0610 12:08:34.240395    4588 client.go:168] LocalClient.Create starting
	I0610 12:08:34.240700    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 12:08:34.240700    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:08:34.241203    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:08:34.241370    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 12:08:34.241548    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:08:34.241548    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:08:34.241738    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 12:08:36.262319    4588 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 12:08:36.262385    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:36.262385    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 12:08:38.140816    4588 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 12:08:38.141270    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:38.141270    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:08:39.735536    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:08:39.735536    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:39.735536    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:08:43.725162    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:08:43.725162    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:43.727495    4588 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 12:08:44.236510    4588 main.go:141] libmachine: Creating SSH key...
	I0610 12:08:44.388057    4588 main.go:141] libmachine: Creating VM...
	I0610 12:08:44.388057    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:08:47.561217    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:08:47.561217    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:47.561217    4588 main.go:141] libmachine: Using switch "Default Switch"
	I0610 12:08:47.561217    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:08:49.510281    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:08:49.510430    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:49.510430    4588 main.go:141] libmachine: Creating VHD
	I0610 12:08:49.510430    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 12:08:53.452049    4588 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 20794A7E-9F85-4605-9CFB-9AB5A2243F5C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 12:08:53.452049    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:53.452049    4588 main.go:141] libmachine: Writing magic tar header
	I0610 12:08:53.452049    4588 main.go:141] libmachine: Writing SSH key tar header
	I0610 12:08:53.463808    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 12:08:56.776237    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:08:56.776237    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:56.776915    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\disk.vhd' -SizeBytes 20000MB
	I0610 12:08:59.460936    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:08:59.460999    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:59.460999    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-813300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 12:09:03.294382    4588 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-813300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 12:09:03.295386    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:03.295486    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-813300-m02 -DynamicMemoryEnabled $false
	I0610 12:09:05.730826    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:05.731605    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:05.731605    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-813300-m02 -Count 2
	I0610 12:09:08.091225    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:08.091225    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:08.091389    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-813300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\boot2docker.iso'
	I0610 12:09:10.917877    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:10.918532    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:10.918532    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-813300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\disk.vhd'
	I0610 12:09:13.890119    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:13.891006    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:13.891060    4588 main.go:141] libmachine: Starting VM...
	I0610 12:09:13.891060    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300-m02
	I0610 12:09:17.217967    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:17.217967    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:17.218129    4588 main.go:141] libmachine: Waiting for host to start...
	I0610 12:09:17.218287    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:19.673262    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:19.673262    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:19.673574    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:22.445782    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:22.445782    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:23.455957    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:25.876321    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:25.876909    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:25.876979    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:28.622723    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:28.622723    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:29.627749    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:32.027877    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:32.027952    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:32.027991    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:34.791963    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:34.791963    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:35.800230    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:38.203051    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:38.203636    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:38.203636    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:40.963011    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:40.963011    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:41.973628    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:44.416582    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:44.416582    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:44.416582    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:47.254049    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:09:47.254049    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:47.254049    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:49.644892    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:49.644892    4588 main.go:141] libmachine: [stderr =====>] : 
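Each Hyper-V operation above is a separate powershell.exe invocation, and after Start-VM the driver simply repeats a state/IP query pair until the first network adapter reports an address. A minimal Go sketch of that poll pattern, assuming the powershell.exe path shown in the log (illustrative only, not minikube's actual driver code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    const psExe = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

    // ps runs one Hyper-V cmdlet non-interactively, as each log line does.
    func ps(cmd string) (string, error) {
    	out, err := exec.Command(psExe, "-NoProfile", "-NonInteractive", cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls the VM's first network adapter until it reports an
    // address or the timeout elapses.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		ip, err := ps(fmt.Sprintf(
    			`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
    		if err == nil && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(time.Second) // the log shows ~1s pauses between polls
    	}
    	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
    	ip, err := waitForIP("multinode-813300-m02", 5*time.Minute)
    	fmt.Println(ip, err)
    }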
	I0610 12:09:49.645559    4588 machine.go:94] provisionDockerMachine start ...
	I0610 12:09:49.645788    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:51.995513    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:51.995513    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:51.995513    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:54.722854    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:09:54.722854    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:54.729030    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:09:54.740222    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:09:54.741219    4588 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:09:54.870273    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:09:54.870349    4588 buildroot.go:166] provisioning hostname "multinode-813300-m02"
	I0610 12:09:54.870417    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:57.155923    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:57.156835    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:57.156835    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:59.869088    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:09:59.869870    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:59.876256    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:09:59.876256    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:09:59.876845    4588 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300-m02 && echo "multinode-813300-m02" | sudo tee /etc/hostname
	I0610 12:10:00.036418    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300-m02
	
	I0610 12:10:00.036539    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:02.352338    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:02.352338    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:02.352850    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:05.115922    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:05.116005    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:05.120761    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:05.121019    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:05.121019    4588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:10:05.266489    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
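The /etc/hosts script above is idempotent: it first checks whether the new hostname is already mapped, then either rewrites an existing 127.0.1.1 entry in place or appends one. A sketch of how such a script can be rendered from the machine name (the helper name is invented for illustration; minikube assembles this command internally):

    package main

    import "fmt"

    // hostsFixScript renders the /etc/hosts repair script for one machine name.
    func hostsFixScript(name string) string {
    	return fmt.Sprintf(`
    	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    		else
    			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    		fi
    	fi`, name)
    }

    func main() { fmt.Println(hostsFixScript("multinode-813300-m02")) }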
	I0610 12:10:05.266489    4588 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:10:05.266489    4588 buildroot.go:174] setting up certificates
	I0610 12:10:05.266489    4588 provision.go:84] configureAuth start
	I0610 12:10:05.266489    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:07.629056    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:07.629289    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:07.629378    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:10.421266    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:10.422131    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:10.422131    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:12.788172    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:12.788347    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:12.788347    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:15.586195    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:15.586195    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:15.586847    4588 provision.go:143] copyHostCerts
	I0610 12:10:15.587004    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 12:10:15.587261    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:10:15.587261    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:10:15.587727    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:10:15.588865    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 12:10:15.589171    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:10:15.589171    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:10:15.589536    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:10:15.589840    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 12:10:15.590722    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:10:15.590722    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:10:15.591178    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:10:15.592371    4588 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300-m02 san=[127.0.0.1 172.17.151.128 localhost minikube multinode-813300-m02]
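provision.go:117 issues a server certificate signed by the minikube CA, carrying the node's IPs and names as SANs. A self-contained sketch of the same idea with Go's crypto/x509 (illustrative; the real provisioner loads the existing ca.pem/ca-key.pem rather than generating a fresh CA):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Stand-in CA; the provisioner would load the existing one instead.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	check(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	check(err)

    	// Server cert with the SANs from the log line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-813300-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.151.128")},
    		DNSNames:     []string{"localhost", "minikube", "multinode-813300-m02"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	check(err)
    	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }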
	I0610 12:10:15.916216    4588 provision.go:177] copyRemoteCerts
	I0610 12:10:15.928750    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:10:15.928750    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:18.250037    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:18.250938    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:18.250996    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:20.970158    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:20.971086    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:20.971674    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:10:21.079420    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1499555s)
	I0610 12:10:21.079420    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 12:10:21.079775    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:10:21.131679    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 12:10:21.132137    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 12:10:21.184128    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 12:10:21.184257    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 12:10:21.239558    4588 provision.go:87] duration metric: took 15.9729376s to configureAuth
	I0610 12:10:21.239632    4588 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:10:21.240051    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:10:21.240051    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:23.584229    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:23.584229    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:23.584318    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:26.362007    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:26.362153    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:26.368272    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:26.369078    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:26.369078    4588 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:10:26.500066    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:10:26.500204    4588 buildroot.go:70] root file system type: tmpfs
	I0610 12:10:26.500502    4588 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:10:26.500502    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:28.830472    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:28.830822    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:28.830822    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:31.638236    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:31.638722    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:31.645248    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:31.645248    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:31.645990    4588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.159.171"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:10:31.817981    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.159.171
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 12:10:31.817981    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:34.157297    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:34.157297    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:34.157297    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:36.961294    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:36.962039    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:36.967778    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:36.968315    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:36.968475    4588 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:10:39.155315    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:10:39.155315    4588 machine.go:97] duration metric: took 49.5093501s to provisionDockerMachine
	I0610 12:10:39.155315    4588 client.go:171] duration metric: took 2m4.9138483s to LocalClient.Create
	I0610 12:10:39.155867    4588 start.go:167] duration metric: took 2m4.9151413s to libmachine.API.Create "multinode-813300"
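The docker.service install above uses an update-if-changed idiom: diff exits non-zero when the two files differ or, as here, when the installed unit does not exist yet, so the mv/daemon-reload/enable/restart branch runs only in that case (the "Created symlink" line is systemctl enable doing its job). A sketch of how that one-liner can be assembled (illustrative only):

    package main

    import "fmt"

    func main() {
    	// Right-hand side runs only when diff reports a difference or the
    	// installed unit is missing, as in the log above.
    	const unit = "/lib/systemd/system/docker.service"
    	cmd := fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || "+
    		"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
    		"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
    		unit)
    	fmt.Println(cmd)
    }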
	I0610 12:10:39.155867    4588 start.go:293] postStartSetup for "multinode-813300-m02" (driver="hyperv")
	I0610 12:10:39.155986    4588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:10:39.168428    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:10:39.168428    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:41.493819    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:41.493819    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:41.493819    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:44.301123    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:44.301123    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:44.301723    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:10:44.414294    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2457575s)
	I0610 12:10:44.427480    4588 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:10:44.434767    4588 command_runner.go:130] > NAME=Buildroot
	I0610 12:10:44.434767    4588 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 12:10:44.434904    4588 command_runner.go:130] > ID=buildroot
	I0610 12:10:44.434904    4588 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 12:10:44.434904    4588 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 12:10:44.435037    4588 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:10:44.435068    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:10:44.435634    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:10:44.437223    4588 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:10:44.437223    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 12:10:44.450343    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:10:44.472867    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:10:44.524171    4588 start.go:296] duration metric: took 5.3682595s for postStartSetup
	I0610 12:10:44.527309    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:46.868202    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:46.868202    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:46.868202    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:49.582486    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:49.582486    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:49.583022    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:10:49.587441    4588 start.go:128] duration metric: took 2m15.3487158s to createHost
	I0610 12:10:49.587441    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:51.933279    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:51.933279    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:51.933844    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:54.672496    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:54.672834    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:54.677987    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:54.677987    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:54.678509    4588 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 12:10:54.806576    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718021454.812845033
	
	I0610 12:10:54.806642    4588 fix.go:216] guest clock: 1718021454.812845033
	I0610 12:10:54.806642    4588 fix.go:229] Guest: 2024-06-10 12:10:54.812845033 +0000 UTC Remote: 2024-06-10 12:10:49.587441 +0000 UTC m=+365.885567601 (delta=5.225404033s)
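fix.go reads the guest clock with `date +%s.%N` and compares it to the host-side timestamp; because the ~5.2s drift exceeds the tolerance, it resets the guest clock with `sudo date -s @...` below. The delta computation, reproduced with the values from this log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values taken from the log lines above.
    	guest := time.Unix(1718021454, 812845033) // guest `date +%s.%N`
    	remote := time.Date(2024, 6, 10, 12, 10, 49, 587441000, time.UTC)
    	fmt.Println(guest.Sub(remote)) // 5.225404033s, matching the logged delta
    }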
	I0610 12:10:54.806642    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:57.087646    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:57.087989    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:57.088094    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:59.860973    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:59.860973    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:59.866816    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:59.866884    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:59.866884    4588 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718021454
	I0610 12:11:00.015191    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:10:54 UTC 2024
	
	I0610 12:11:00.015191    4588 fix.go:236] clock set: Mon Jun 10 12:10:54 UTC 2024
	 (err=<nil>)
	I0610 12:11:00.015191    4588 start.go:83] releasing machines lock for "multinode-813300-m02", held for 2m25.7770525s
	I0610 12:11:00.015500    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:11:02.362997    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:02.362997    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:02.363073    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:05.203470    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:11:05.203551    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:05.208269    4588 out.go:177] * Found network options:
	I0610 12:11:05.211963    4588 out.go:177]   - NO_PROXY=172.17.159.171
	W0610 12:11:05.214531    4588 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 12:11:05.217146    4588 out.go:177]   - NO_PROXY=172.17.159.171
	W0610 12:11:05.219128    4588 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 12:11:05.221154    4588 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 12:11:05.223154    4588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:11:05.223154    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:11:05.233134    4588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 12:11:05.233134    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:11:07.621816    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:07.622943    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:10.545475    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:11:10.545604    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:10.546196    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:11:10.557804    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:11:10.557804    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:10.558804    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:11:10.655498    4588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0610 12:11:10.780338    4588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 12:11:10.780338    4588 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5571395s)
	I0610 12:11:10.780338    4588 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.5471587s)
	W0610 12:11:10.780338    4588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:11:10.792576    4588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:11:10.825526    4588 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 12:11:10.825771    4588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 12:11:10.825771    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:11:10.825771    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:11:10.868331    4588 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 12:11:10.886782    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:11:10.926185    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:11:10.951492    4588 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:11:10.964107    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:11:10.998277    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:11:11.036407    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:11:11.071765    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:11:11.112069    4588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:11:11.147207    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:11:11.180467    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:11:11.213384    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
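The sed calls above flip containerd to the cgroupfs driver and normalize its runtime and CNI settings in /etc/containerd/config.toml. The SystemdCgroup edit, for instance, is equivalent to this Go rewrite (illustrative only; the provisioner really does run sed over SSH as shown):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
    	// Matches any indented SystemdCgroup line and preserves its indentation,
    	// like: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }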
	I0610 12:11:11.244518    4588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:11:11.263227    4588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 12:11:11.274302    4588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:11:11.307150    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:11.524102    4588 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 12:11:11.560382    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:11:11.573859    4588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:11:11.598593    4588 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 12:11:11.598631    4588 command_runner.go:130] > [Unit]
	I0610 12:11:11.598631    4588 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 12:11:11.598668    4588 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 12:11:11.598668    4588 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 12:11:11.598668    4588 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 12:11:11.598668    4588 command_runner.go:130] > StartLimitBurst=3
	I0610 12:11:11.598668    4588 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 12:11:11.598727    4588 command_runner.go:130] > [Service]
	I0610 12:11:11.598727    4588 command_runner.go:130] > Type=notify
	I0610 12:11:11.598727    4588 command_runner.go:130] > Restart=on-failure
	I0610 12:11:11.598727    4588 command_runner.go:130] > Environment=NO_PROXY=172.17.159.171
	I0610 12:11:11.598727    4588 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 12:11:11.598727    4588 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 12:11:11.598863    4588 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 12:11:11.598863    4588 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 12:11:11.598863    4588 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 12:11:11.598863    4588 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 12:11:11.598863    4588 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 12:11:11.598963    4588 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 12:11:11.598963    4588 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 12:11:11.598963    4588 command_runner.go:130] > ExecStart=
	I0610 12:11:11.598963    4588 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 12:11:11.599028    4588 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 12:11:11.599028    4588 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 12:11:11.599028    4588 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 12:11:11.599028    4588 command_runner.go:130] > LimitNOFILE=infinity
	I0610 12:11:11.599028    4588 command_runner.go:130] > LimitNPROC=infinity
	I0610 12:11:11.599028    4588 command_runner.go:130] > LimitCORE=infinity
	I0610 12:11:11.599028    4588 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 12:11:11.599028    4588 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 12:11:11.599140    4588 command_runner.go:130] > TasksMax=infinity
	I0610 12:11:11.599140    4588 command_runner.go:130] > TimeoutStartSec=0
	I0610 12:11:11.599140    4588 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 12:11:11.599140    4588 command_runner.go:130] > Delegate=yes
	I0610 12:11:11.599140    4588 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 12:11:11.599140    4588 command_runner.go:130] > KillMode=process
	I0610 12:11:11.599140    4588 command_runner.go:130] > [Install]
	I0610 12:11:11.599140    4588 command_runner.go:130] > WantedBy=multi-user.target
	I0610 12:11:11.612843    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:11:11.652543    4588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:11:11.699581    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:11:11.738711    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:11:11.780078    4588 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:11:11.854242    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:11:11.887820    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:11:11.926828    4588 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 12:11:11.941661    4588 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:11:11.949084    4588 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 12:11:11.960762    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:11:11.987519    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:11:12.036700    4588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:11:12.255159    4588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:11:12.474321    4588 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:11:12.474461    4588 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 12:11:12.521376    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:12.736988    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:11:15.281594    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5445856s)
	I0610 12:11:15.295747    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 12:11:15.337687    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:11:15.375551    4588 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 12:11:15.617767    4588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 12:11:15.838434    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:16.049989    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 12:11:16.095406    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:11:16.132342    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:16.337717    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 12:11:16.465652    4588 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 12:11:16.479852    4588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 12:11:16.489205    4588 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 12:11:16.489286    4588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 12:11:16.489318    4588 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0610 12:11:16.489345    4588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 12:11:16.489345    4588 command_runner.go:130] > Access: 2024-06-10 12:11:16.374337285 +0000
	I0610 12:11:16.489394    4588 command_runner.go:130] > Modify: 2024-06-10 12:11:16.374337285 +0000
	I0610 12:11:16.489394    4588 command_runner.go:130] > Change: 2024-06-10 12:11:16.377337327 +0000
	I0610 12:11:16.489428    4588 command_runner.go:130] >  Birth: -
	I0610 12:11:16.489428    4588 start.go:562] Will wait 60s for crictl version
	I0610 12:11:16.501661    4588 ssh_runner.go:195] Run: which crictl
	I0610 12:11:16.508650    4588 command_runner.go:130] > /usr/bin/crictl
	I0610 12:11:16.522045    4588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:11:16.577734    4588 command_runner.go:130] > Version:  0.1.0
	I0610 12:11:16.577734    4588 command_runner.go:130] > RuntimeName:  docker
	I0610 12:11:16.577734    4588 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 12:11:16.577734    4588 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 12:11:16.577867    4588 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 12:11:16.586649    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:11:16.627174    4588 command_runner.go:130] > 26.1.4
	I0610 12:11:16.637565    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:11:16.672485    4588 command_runner.go:130] > 26.1.4
	I0610 12:11:16.677357    4588 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 12:11:16.680604    4588 out.go:177]   - env NO_PROXY=172.17.159.171
	I0610 12:11:16.682631    4588 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 12:11:16.690150    4588 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 12:11:16.690150    4588 ip.go:210] interface addr: 172.17.144.1/20
	I0610 12:11:16.703778    4588 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 12:11:16.711418    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:11:16.733435    4588 mustload.go:65] Loading cluster: multinode-813300
	I0610 12:11:16.734138    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:11:16.734810    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:11:19.011757    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:19.012790    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:19.012790    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:11:19.013573    4588 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300 for IP: 172.17.151.128
	I0610 12:11:19.013573    4588 certs.go:194] generating shared ca certs ...
	I0610 12:11:19.013573    4588 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:11:19.013917    4588 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 12:11:19.014532    4588 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 12:11:19.014800    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 12:11:19.015170    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 12:11:19.015290    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 12:11:19.015688    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 12:11:19.016370    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 12:11:19.016618    4588 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 12:11:19.016812    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 12:11:19.017069    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 12:11:19.017245    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 12:11:19.017624    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 12:11:19.017944    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 12:11:19.017944    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.018393    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.018580    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.018708    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 12:11:19.074850    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 12:11:19.123648    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 12:11:19.175920    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 12:11:19.221951    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 12:11:19.276690    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 12:11:19.328081    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 12:11:19.391788    4588 ssh_runner.go:195] Run: openssl version
	I0610 12:11:19.402568    4588 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 12:11:19.420480    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 12:11:19.454097    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.461999    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.461999    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.475323    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.489426    4588 command_runner.go:130] > b5213941
	I0610 12:11:19.501484    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 12:11:19.534058    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 12:11:19.566004    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.572892    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.573207    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.584393    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.594218    4588 command_runner.go:130] > 51391683
	I0610 12:11:19.608435    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 12:11:19.641477    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 12:11:19.673326    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.680330    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.680882    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.692878    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.704044    4588 command_runner.go:130] > 3ec20f2e
	I0610 12:11:19.714906    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
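The block above is the standard OpenSSL hashed-directory convention: "openssl x509 -hash -noout" prints the subject-name hash (b5213941, 51391683, 3ec20f2e here), and a certificate becomes visible to the system trust store once a symlink /etc/ssl/certs/<hash>.0 points at it. A minimal Go sketch of the same pattern, shelling out to openssl and ln exactly as the logged commands do (an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCA mirrors the logged commands: hash the certificate's subject name,
// then link it into /etc/ssl/certs under "<8-hex-digit hash>.0".
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// Same idempotent shape as the log: only create the link if it is missing,
	// so reruns of the step are harmless.
	script := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)
	return exec.Command("sudo", "/bin/bash", "-c", script).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}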
	I0610 12:11:19.746683    4588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:11:19.753164    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:11:19.753835    4588 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:11:19.753979    4588 kubeadm.go:928] updating node {m02 172.17.151.128 8443 v1.30.1 docker false true} ...
	I0610 12:11:19.753979    4588 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.151.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 12:11:19.766808    4588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 12:11:19.786670    4588 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	I0610 12:11:19.786670    4588 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 12:11:19.799248    4588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 12:11:19.819418    4588 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0610 12:11:19.819418    4588 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0610 12:11:19.820008    4588 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0610 12:11:19.820008    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 12:11:19.820186    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 12:11:19.837476    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:11:19.838584    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 12:11:19.841021    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 12:11:19.860269    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 12:11:19.860269    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 12:11:19.860899    4588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 12:11:19.860899    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 12:11:19.861094    4588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 12:11:19.861094    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 12:11:19.861150    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 12:11:19.875476    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 12:11:19.927216    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 12:11:19.928269    4588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 12:11:19.928622    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
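Nothing was cached locally for v1.30.1, so kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a checksum pinned to the matching .sha256 file (the "?checksum=file:..." syntax is go-getter's). A rough Go sketch of that verify-while-downloading idea, using the dl.k8s.io layout from the log; a simplification, not minikube's downloader:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// download saves url to dst and returns the hex SHA-256 of the bytes written,
// hashing while streaming so the file is never read twice.
func download(url, dst string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet"
	got, err := download(base, "kubelet")
	if err != nil {
		panic(err)
	}
	// The companion .sha256 file holds just the hex digest.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubelet checksum OK:", got)
}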
	I0610 12:11:21.395244    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0610 12:11:21.414600    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0610 12:11:21.454103    4588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
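"scp memory --> ..." means the unit text is rendered in memory and written straight to the node, with no temp file on the host. The 321-byte drop-in is the [Unit]/[Service]/[Install] block printed by kubeadm.go:940 above. A sketch that writes the same text on a node (content copied from the log; requires root):

package main

import "os"

// Drop-in content as logged by kubeadm.go:940 above.
const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.151.128

[Install]
`

func main() {
	// Writing under /etc requires root; a daemon-reload must follow,
	// as the log does next with "sudo systemctl daemon-reload".
	err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644)
	if err != nil {
		panic(err)
	}
}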
	I0610 12:11:21.515630    4588 ssh_runner.go:195] Run: grep 172.17.159.171	control-plane.minikube.internal$ /etc/hosts
	I0610 12:11:21.522801    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:11:21.563217    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:21.775475    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:11:21.807974    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:11:21.808784    4588 start.go:316] joinCluster: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:11:21.808980    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 12:11:21.809040    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:11:24.214569    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:24.214569    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:24.215479    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:26.984919    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:11:26.984919    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:26.985727    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:11:27.193620    4588 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gf7439.c4abko5fnf4w17n8 --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
	I0610 12:11:27.193620    4588 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3845966s)
	I0610 12:11:27.193620    4588 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:11:27.193620    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gf7439.c4abko5fnf4w17n8 --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-813300-m02"
	I0610 12:11:27.412803    4588 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:11:29.260064    4588 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 12:11:29.260064    4588 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0610 12:11:29.260064    4588 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0610 12:11:29.260064    4588 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:11:29.260064    4588 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.502015791s
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0610 12:11:29.260185    4588 command_runner.go:130] > This node has joined the cluster:
	I0610 12:11:29.260185    4588 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0610 12:11:29.260185    4588 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0610 12:11:29.260185    4588 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0610 12:11:29.260185    4588 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gf7439.c4abko5fnf4w17n8 --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-813300-m02": (2.0665485s)
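The join above is a two-step flow: print a non-expiring join command on the control plane, then run it on the worker with --ignore-preflight-errors=all, the cri-dockerd socket, and an explicit node name appended. A simplified Go sketch of that flow; runSSH is a hypothetical stand-in for minikube's ssh_runner, and hosts/flags are taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// runSSH executes cmd on host via the system ssh client and returns stdout.
func runSSH(host, cmd string) (string, error) {
	out, err := exec.Command("ssh", host, cmd).Output()
	return string(out), err
}

func main() {
	controlPlane, worker := "docker@172.17.159.171", "docker@172.17.151.128"
	// Step 1: ask the control plane for a join command with a non-expiring token.
	joinCmd, err := runSSH(controlPlane,
		`sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0`)
	if err != nil {
		panic(err)
	}
	// Step 2: append the worker-side flags the log shows and run it there.
	full := strings.TrimSpace(joinCmd) +
		" --ignore-preflight-errors=all" +
		" --cri-socket unix:///var/run/cri-dockerd.sock" +
		" --node-name=multinode-813300-m02"
	if _, err := runSSH(worker, "sudo /bin/bash -c "+strconv.Quote(full)); err != nil {
		panic(err)
	}
	fmt.Println("worker joined")
}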
	I0610 12:11:29.260308    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 12:11:29.477872    4588 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0610 12:11:29.694891    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-813300-m02 minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=multinode-813300 minikube.k8s.io/primary=false
	I0610 12:11:29.850112    4588 command_runner.go:130] > node/multinode-813300-m02 labeled
	I0610 12:11:29.850212    4588 start.go:318] duration metric: took 8.0413623s to joinCluster
	I0610 12:11:29.850367    4588 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:11:29.855200    4588 out.go:177] * Verifying Kubernetes components...
	I0610 12:11:29.851036    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:11:29.872060    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:30.101494    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:11:30.133140    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:11:30.133905    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:11:30.134653    4588 node_ready.go:35] waiting up to 6m0s for node "multinode-813300-m02" to be "Ready" ...
	I0610 12:11:30.135218    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:30.135218    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:30.135218    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:30.135218    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:30.154207    4588 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0610 12:11:30.154300    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:30 GMT
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Audit-Id: 120211c2-3f44-4da6-84af-a42103a0ca12
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:30.154300    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:30.154300    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:30.154462    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:30.640539    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:30.640539    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:30.640539    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:30.640539    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:30.648978    4588 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:11:30.648978    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:30.648978    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:30.648978    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:30 GMT
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Audit-Id: b18c775d-77ef-4caa-914c-7283fd55f1aa
	I0610 12:11:30.648978    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:31.145201    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:31.145282    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:31.145282    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:31.145282    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:31.152903    4588 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:11:31.152903    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:31.152903    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:31.152903    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:31 GMT
	I0610 12:11:31.152903    4588 round_trippers.go:580]     Audit-Id: 53a17888-1a8e-4851-8815-1bc758b4e0d1
	I0610 12:11:31.153005    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:31.153005    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:31.153005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:31.153005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:31.153133    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:31.642808    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:31.642895    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:31.642895    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:31.642895    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:31.646234    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:31.647170    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:31.647238    4588 round_trippers.go:580]     Audit-Id: 2c94ef73-ffa9-41c2-9f48-2d1eda7b40b0
	I0610 12:11:31.647238    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:31.647258    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:31.647258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:31.647258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:31.647258    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:31.647258    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:31 GMT
	I0610 12:11:31.647389    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:32.146589    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:32.146654    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:32.146654    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:32.146654    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:32.151245    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:32.151473    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:32.151473    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:32.151473    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:32 GMT
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Audit-Id: 02a02b92-b406-46fa-a89f-f11d3aa78b57
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:32.151619    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:32.152091    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
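Everything from here down is a readiness poll: roughly every 500ms the test GETs /api/v1/nodes/multinode-813300-m02 and checks whether the Ready condition is True, with a 6m0s budget (node_ready.go:35). The same check written against client-go, assuming the kubeconfig path from the log; a sketch, not minikube's node_ready code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as node_ready.go:35
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-813300-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the observed poll interval
	}
	panic(`timed out waiting for node "multinode-813300-m02" to be Ready`)
}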
	I0610 12:11:32.647908    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:32.647908    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:32.647908    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:32.647908    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:32.655278    4588 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:11:32.656309    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:32.656309    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:32.656309    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:32.656309    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:32.656381    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:32.656381    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:32.656381    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:32 GMT
	I0610 12:11:32.656381    4588 round_trippers.go:580]     Audit-Id: 8be91a38-9480-4dc6-bb32-e813479247b1
	I0610 12:11:32.656509    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:33.136161    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:33.136161    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:33.136161    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:33.136370    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:33.140480    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:33.140480    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:33.140480    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:33 GMT
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Audit-Id: 829ec5bb-9a54-441f-9a33-3fac4f603fda
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:33.140595    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:33.140677    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:33.649302    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:33.649302    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:33.649302    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:33.649302    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:33.653244    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:33.653244    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:33 GMT
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Audit-Id: f5522161-62a8-4be2-b191-8cee428580bd
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:33.653782    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:33.653782    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:33.653862    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:33.653862    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:34.140515    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:34.140774    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:34.140774    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:34.140774    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:34.144741    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:34.144836    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:34.144836    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:34.144917    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:34 GMT
	I0610 12:11:34.144998    4588 round_trippers.go:580]     Audit-Id: ffbf68f4-fcd8-46dd-aeb6-1bbbbe2cb644
	I0610 12:11:34.144998    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:34.144998    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:34.145028    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:34.145028    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:34.145028    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:34.641306    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:34.641355    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:34.641355    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:34.641395    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:34.648180    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:11:34.649068    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:34.649068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:34.649068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:34.649068    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:34.649068    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:34 GMT
	I0610 12:11:34.649158    4588 round_trippers.go:580]     Audit-Id: 3161a238-0ca8-4ad9-b851-e3ba727a1005
	I0610 12:11:34.649158    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:34.649158    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:34.649480    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:34.649960    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:35.141434    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:35.141434    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:35.141434    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:35.141544    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:35.144794    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:35.145459    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Audit-Id: 8bfb5db6-acd9-419a-a15c-52a9cae18cf4
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:35.145459    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:35.145459    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:35 GMT
	I0610 12:11:35.145647    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:35.649334    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:35.649334    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:35.649334    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:35.649334    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:35.654625    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:11:35.654625    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:35.654625    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:35.654625    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:35.654625    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:35.654625    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:35.654719    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:35 GMT
	I0610 12:11:35.654719    4588 round_trippers.go:580]     Audit-Id: 36583692-c8d0-4e9c-9ce6-c1c822dd5fa2
	I0610 12:11:35.654719    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:35.654755    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:36.140102    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:36.140102    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:36.140102    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:36.140102    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:36.143717    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:36.143988    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:36.143988    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:36.143988    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:36.143988    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:36 GMT
	I0610 12:11:36.143988    4588 round_trippers.go:580]     Audit-Id: 677c1be2-6b1f-4364-9375-811a12bc2d54
	I0610 12:11:36.144073    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:36.144073    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:36.144073    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:36.144299    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:36.647892    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:36.647892    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:36.647960    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:36.647960    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:36.652449    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:36.652449    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:36.653266    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:36.653266    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:36.653266    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:36.653266    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:36 GMT
	I0610 12:11:36.653367    4588 round_trippers.go:580]     Audit-Id: 0775cb60-f275-466b-beb7-fbd374a788eb
	I0610 12:11:36.653367    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:36.653367    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:36.653528    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:36.654008    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:37.140931    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:37.140931    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:37.140931    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:37.140931    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:37.145903    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:37.145903    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:37.145992    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:37.145992    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:37 GMT
	I0610 12:11:37.146069    4588 round_trippers.go:580]     Audit-Id: 0ae2f0c6-2a9b-45d0-a1d0-d6e366a1cda3
	I0610 12:11:37.146134    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:37.649232    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:37.649232    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:37.649232    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:37.649232    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:37.654247    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:11:37.654537    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:37.654537    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:37.654537    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:37 GMT
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Audit-Id: 6692a4c9-18ea-498b-9bac-d8956738e490
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:37.654750    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:38.140018    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:38.140097    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:38.140097    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:38.140097    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:38.143731    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:38.144482    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:38.144482    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:38.144482    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:38.144482    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:38.144569    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:38.144569    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:38.144569    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:38 GMT
	I0610 12:11:38.144569    4588 round_trippers.go:580]     Audit-Id: 8d344def-2d40-4c03-9670-8ae9d6a107b8
	I0610 12:11:38.144569    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:38.645605    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:38.645605    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:38.645605    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:38.645605    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:38.650198    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:38.650198    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:38.650198    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:38.650198    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:38.650198    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:38.650198    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:38 GMT
	I0610 12:11:38.650372    4588 round_trippers.go:580]     Audit-Id: 262d504f-c6bd-4fe3-8221-cde83d48b444
	I0610 12:11:38.650372    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:38.650372    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:38.650598    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:39.145556    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:39.145556    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:39.145556    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:39.145556    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:39.150540    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:39.151438    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:39.151438    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:39.151438    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:39 GMT
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Audit-Id: 97195732-aef3-4a63-8e27-d623b638c932
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:39.152316    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:39.152904    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
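
[editor's note] The cycle above, repeated below with fresh timestamps and Audit-Ids, is minikube's node-readiness wait: node_ready.go issues a GET against /api/v1/nodes/multinode-813300-m02 roughly every 500ms (compare the request timestamps) and inspects the returned Node object, logging has status "Ready":"False" until the kubelet reports Ready. The Go sketch below reproduces that pattern with client-go. It is an illustration only, not minikube's actual code: the ~500ms interval is read off the log, while the 6-minute timeout, the kubeconfig source, and the helper name waitNodeReady are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server, as the trace above does about every
// 500ms, until the node's NodeReady condition is True or the timeout expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat Get errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition reported yet
		})
}

func main() {
	// Assumption: credentials come from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Node name taken from the log; the 6-minute timeout is an assumed budget.
	if err := waitNodeReady(context.Background(), cs, "multinode-813300-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

Returning false, nil on a failed Get keeps the poll alive across transient API hiccups instead of failing the whole wait, which matches how a loop like this survives the occasional slow response (e.g. the 378ms round trip later in this trace).
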
	I0610 12:11:39.646188    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:39.646188    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:39.646188    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:39.646188    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:39.650273    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:39.650347    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:39.650347    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:39.650347    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:39 GMT
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Audit-Id: 63a802e5-f779-4df4-95b0-69698f33f890
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:39.650611    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:40.135464    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:40.135464    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:40.135464    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:40.135464    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:40.139465    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:40.139465    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:40 GMT
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Audit-Id: 3005cad6-5eb1-4e80-9df6-7f76602ade8f
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:40.140037    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:40.140037    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:40.140181    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:40.647037    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:40.647242    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:40.647242    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:40.647242    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:40.652362    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:11:40.652362    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:40.652362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:40.652362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:40 GMT
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Audit-Id: b703d6a1-f080-4fd1-a944-38afee287a18
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:40.652965    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:41.137147    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:41.137147    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:41.137147    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:41.137147    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:41.141766    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:41.141766    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:41.141766    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:41.141766    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:41 GMT
	I0610 12:11:41.141766    4588 round_trippers.go:580]     Audit-Id: 31939970-7805-4a89-9e76-a7fad299f03e
	I0610 12:11:41.142164    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:41.142164    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:41.142164    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:41.142304    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:41.644436    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:41.644493    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:41.644493    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:41.644493    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:41.648780    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:41.648780    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:41.648780    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:41.648780    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:41.648780    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:41.648780    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:41 GMT
	I0610 12:11:41.648780    4588 round_trippers.go:580]     Audit-Id: 0825d248-901f-4c1d-810e-5285b2152eed
	I0610 12:11:41.649725    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:41.649994    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:41.650452    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:42.136785    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:42.136785    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:42.136785    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:42.136785    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:42.140392    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:42.140392    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:42.140392    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:42.140392    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:42 GMT
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Audit-Id: 11b80fc3-7764-4796-b629-31a53e9d8efe
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:42.141123    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:42.646819    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:42.646819    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:42.646819    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:42.646819    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:42.651676    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:42.651676    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Audit-Id: 05161fa0-65a0-4dfa-9fce-c6366744f573
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:42.651676    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:42.651676    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:42 GMT
	I0610 12:11:42.652003    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:43.140233    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:43.140503    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:43.140503    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:43.140589    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:43.143984    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:43.143984    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:43.143984    4588 round_trippers.go:580]     Audit-Id: 048190cf-d8d4-4e7c-ad65-ba33997dd557
	I0610 12:11:43.144542    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:43.144542    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:43.144542    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:43.144542    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:43.144542    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:43 GMT
	I0610 12:11:43.144821    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:43.646980    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:43.646980    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:43.647093    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:43.647093    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:43.649867    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:43.650767    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:43.650767    4588 round_trippers.go:580]     Audit-Id: debea53e-3d89-46ce-9861-43438e7ef3fb
	I0610 12:11:43.650903    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:43.650903    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:43.650903    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:43.650903    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:43.650903    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:43 GMT
	I0610 12:11:43.650903    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:43.650903    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:44.141683    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:44.141759    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:44.141759    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:44.141759    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:44.146005    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:44.146005    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Audit-Id: fd0b413f-d703-4826-88f7-f92b964e7225
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:44.146005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:44.146005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:44 GMT
	I0610 12:11:44.146005    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:44.648434    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:44.648568    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:44.648568    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:44.648568    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:45.026766    4588 round_trippers.go:574] Response Status: 200 OK in 378 milliseconds
	I0610 12:11:45.026888    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Audit-Id: 2ffab90b-53ae-414a-a7af-dc244c1a0d38
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:45.026939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:45.026939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:45 GMT
	I0610 12:11:45.026939    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:45.150155    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:45.150155    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:45.150155    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:45.150155    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:45.154085    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:45.154085    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:45.154085    4588 round_trippers.go:580]     Audit-Id: 96ef9dbe-5664-4716-9850-3761e6347748
	I0610 12:11:45.154150    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:45.154150    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:45.154150    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:45.154150    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:45.154150    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:45 GMT
	I0610 12:11:45.154663    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:45.640479    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:45.640479    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:45.640479    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:45.640479    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:45.644051    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:45.644886    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:45.644886    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:45.644886    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:45 GMT
	I0610 12:11:45.644990    4588 round_trippers.go:580]     Audit-Id: 59a50b84-480f-4407-866c-91f7a741c38f
	I0610 12:11:45.645063    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:45.645140    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:45.645229    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:45.645297    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:46.144014    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:46.144073    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:46.144073    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:46.144073    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:46.147638    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:46.147638    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:46.147638    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:46.147638    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:46.147638    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:46 GMT
	I0610 12:11:46.147638    4588 round_trippers.go:580]     Audit-Id: ae143dec-a170-46f3-8120-7d6e3e03234a
	I0610 12:11:46.148117    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:46.148117    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:46.148620    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:46.148620    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:46.640820    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:46.640989    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:46.640989    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:46.641063    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:46.645172    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:46.645213    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Audit-Id: 847d5b54-5db6-4652-9704-c8c39063334c
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:46.645213    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:46.645213    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:46 GMT
	I0610 12:11:46.645213    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:47.141987    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:47.141987    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:47.141987    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:47.141987    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:47.145594    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:47.145594    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:47.145996    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:47.145996    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:47 GMT
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Audit-Id: 51c3741a-3779-4687-9675-ec8b78395d73
	I0610 12:11:47.146242    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:47.639611    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:47.639688    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:47.639688    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:47.639688    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:47.643746    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:47.643746    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:47.643746    4588 round_trippers.go:580]     Audit-Id: e150216b-0242-4c48-ba26-ceed233c4e9e
	I0610 12:11:47.643746    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:47.643877    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:47.643877    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:47.643877    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:47.643877    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:47 GMT
	I0610 12:11:47.644149    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:48.138285    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:48.138501    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:48.138501    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:48.138501    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:48.142963    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:48.142963    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:48.143652    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:48.143652    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:48 GMT
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Audit-Id: 1b862a76-f4a3-4be6-a4f2-bf278ed88005
	I0610 12:11:48.143747    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:48.650829    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:48.650909    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:48.650909    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:48.650909    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:48.660633    4588 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 12:11:48.660899    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Audit-Id: 17e5626d-5a6a-46d3-bc16-7e7057afeec3
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:48.660899    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:48.660899    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:48 GMT
	I0610 12:11:48.661433    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:48.661959    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
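
[editor's note] The round_trippers.go and request.go lines throughout this trace are client-go's built-in debug logging, which minikube enables when run at high klog verbosity (the report elsewhere shows --alsologtostderr style flags). The "[truncated 3398 chars]" markers are client-go's own body truncation at that level, roughly 1 KiB shown per body. The snippet below is a minimal sketch of how a Go program switches this tracing on, under the assumption that upstream client-go gates URL/status/header logging around verbosity 6-7 and response-body logging at verbosity 8.

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags (-v, -alsologtostderr, ...) on the default flag set.
	klog.InitFlags(nil)
	// Assumption: at -v=8 client-go logs request URLs plus response status and
	// headers (round_trippers.go) and response bodies truncated to ~1 KiB
	// (request.go), which is the shape of the trace in this report.
	_ = flag.Set("v", "8")
	_ = flag.Set("alsologtostderr", "true")
	flag.Parse()
	defer klog.Flush()

	// Any client-go REST client built after this point will emit traces like
	// the GET https://172.17.159.171:8443/api/v1/nodes/... lines above.
}

The X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid response headers repeated in every cycle identify the API Priority and Fairness objects that classified each request; their constant values here show every poll being handled by the same flow schema and priority level.
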
	I0610 12:11:49.136114    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:49.136114    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:49.136114    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:49.136114    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:49.140691    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:49.140691    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Audit-Id: 0c770e35-ded7-43e1-876e-cb07a38fd2ec
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:49.141710    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:49.141710    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:49 GMT
	I0610 12:11:49.141900    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:49.649392    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:49.649667    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:49.649722    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:49.649722    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:49.656181    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:11:49.656181    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:49.656181    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:49.656181    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:49 GMT
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Audit-Id: f82a420f-5dd7-47d8-950d-49e3d39c7c47
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:49.656719    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:50.150676    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:50.150676    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:50.150676    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:50.150676    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:50.155265    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:50.155265    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:50.155265    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:50.155265    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:50 GMT
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Audit-Id: fef3067f-7dbf-4d79-bc69-c0238a7f6f1e
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:50.155735    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:50.649159    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:50.649159    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:50.649159    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:50.649159    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:50.653519    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:50.653519    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:50.653519    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:50.653519    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:50 GMT
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Audit-Id: 8688b0cf-3044-4665-8f85-fc7d50db907c
	I0610 12:11:50.653519    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:51.149572    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:51.149572    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.149572    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.149572    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.154215    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:51.154479    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.154479    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.154479    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Audit-Id: 212364d7-a337-45b2-9ccb-42587fa16fbd
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.154574    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:51.154574    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:51.636574    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:51.636574    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.636574    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.636574    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.648795    4588 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0610 12:11:51.648795    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.648795    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.648795    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.648795    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.648874    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.648874    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.648874    4588 round_trippers.go:580]     Audit-Id: 15cb6306-cb2e-42c9-90f9-f0ea78aa907e
	I0610 12:11:51.649046    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"640","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0610 12:11:51.649843    4588 node_ready.go:49] node "multinode-813300-m02" has status "Ready":"True"
	I0610 12:11:51.649913    4588 node_ready.go:38] duration metric: took 21.5150861s for node "multinode-813300-m02" to be "Ready" ...
	I0610 12:11:51.649913    4588 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:11:51.649984    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:11:51.649984    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.649984    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.649984    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.658205    4588 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:11:51.658205    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.658205    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.658205    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Audit-Id: 4892c8a9-dc91-4772-83d2-aaf257434292
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.659421    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"640"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70486 chars]
	I0610 12:11:51.663308    4588 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.663308    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:11:51.663308    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.663308    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.663308    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.666480    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:51.666717    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.666717    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.666717    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Audit-Id: 29e5482f-5681-47f7-833b-ea8a2eaca847
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.666984    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0610 12:11:51.667673    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.667673    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.667673    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.667732    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.669455    4588 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 12:11:51.669455    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.669455    4588 round_trippers.go:580]     Audit-Id: bc194cc6-fd6f-420a-89b0-01f8d0a70bfd
	I0610 12:11:51.669455    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.670408    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.670408    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.670408    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.670408    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.670809    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.671358    4588 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.671358    4588 pod_ready.go:81] duration metric: took 8.0504ms for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.671358    4588 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.671495    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-813300
	I0610 12:11:51.671592    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.671592    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.671657    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.673658    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.673658    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.673658    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.673658    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Audit-Id: 7b458228-14ae-4077-b82e-2cbe339be6a6
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.674781    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"e48af956-8533-4b8e-be5d-0834484cbffa","resourceVersion":"385","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.159.171:2379","kubernetes.io/config.hash":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.mirror":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.seen":"2024-06-10T12:08:00.781973961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0610 12:11:51.674781    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.675319    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.675319    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.675319    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.678378    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.678579    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.678579    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Audit-Id: 67628109-d0cf-4546-acc6-77a9b7f24051
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.678579    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.678984    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.678984    4588 pod_ready.go:92] pod "etcd-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.678984    4588 pod_ready.go:81] duration metric: took 7.6256ms for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.678984    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.678984    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-813300
	I0610 12:11:51.678984    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.679522    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.679522    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.681723    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.681723    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.682457    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Audit-Id: 006b6c27-a6c2-4581-9d6d-b3591452ff62
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.682457    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.682703    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-813300","namespace":"kube-system","uid":"f824b391-b3d2-49ec-ba7d-863cb2150f81","resourceVersion":"386","creationTimestamp":"2024-06-10T12:07:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.159.171:8443","kubernetes.io/config.hash":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.mirror":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.seen":"2024-06-10T12:07:52.425003820Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0610 12:11:51.682824    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.682824    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.682824    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.682824    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.686165    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.686165    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Audit-Id: 1a7c9c37-ae20-4df4-9b97-f0c2a3dbc6bd
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.686272    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.686272    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.686558    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.687382    4588 pod_ready.go:92] pod "kube-apiserver-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.687439    4588 pod_ready.go:81] duration metric: took 8.4554ms for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.687516    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.687601    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-813300
	I0610 12:11:51.687601    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.687601    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.687601    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.690594    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.691080    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Audit-Id: 99614bca-e7d3-4d5a-bcd7-a928cb9b154e
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.691080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.691080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.691464    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-813300","namespace":"kube-system","uid":"879be9d7-8b2b-4f58-ba70-61d4e9f3441e","resourceVersion":"384","creationTimestamp":"2024-06-10T12:08:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.mirror":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.seen":"2024-06-10T12:08:00.781970961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0610 12:11:51.692144    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.692144    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.692144    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.692144    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.694634    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.694634    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.694634    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.694634    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.694984    4588 round_trippers.go:580]     Audit-Id: 32d4392b-f53e-46ab-be25-56be6d4cbf25
	I0610 12:11:51.694984    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.695078    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.695101    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.695358    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.695860    4588 pod_ready.go:92] pod "kube-controller-manager-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.695917    4588 pod_ready.go:81] duration metric: took 8.4006ms for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.695964    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.839454    4588 request.go:629] Waited for 143.1953ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:11:51.839923    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:11:51.839923    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.839923    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.839923    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.843515    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:51.843814    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.843814    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.843884    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.843884    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.843921    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.843921    4588 round_trippers.go:580]     Audit-Id: ae52edfd-adbd-41e2-9903-60b4ca215d9e
	I0610 12:11:51.843921    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.843921    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrpvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1","resourceVersion":"380","creationTimestamp":"2024-06-10T12:08:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0610 12:11:52.037284    4588 request.go:629] Waited for 192.0358ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.037410    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.037470    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.037470    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.037470    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.041986    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:52.041986    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.041986    4588 round_trippers.go:580]     Audit-Id: 6f58beea-d4d9-4031-a26a-f0800096bfaa
	I0610 12:11:52.043065    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.043065    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.043065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.043065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.043065    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.043433    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:52.044120    4588 pod_ready.go:92] pod "kube-proxy-nrpvt" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:52.044181    4588 pod_ready.go:81] duration metric: took 348.2135ms for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.044181    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.249108    4588 request.go:629] Waited for 204.4773ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:11:52.249396    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:11:52.249396    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.249396    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.249396    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.253114    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:52.254189    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Audit-Id: 22ba6e39-243b-40db-98c8-3e627dba7115
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.254189    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.254189    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.254310    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rx2b2","generateName":"kube-proxy-","namespace":"kube-system","uid":"ce59a99b-a561-4598-9399-147f748433a2","resourceVersion":"622","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0610 12:11:52.451902    4588 request.go:629] Waited for 196.8687ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:52.452172    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:52.452172    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.452227    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.452227    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.456977    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:52.456977    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.456977    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.457882    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.457882    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.457882    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.457882    4588 round_trippers.go:580]     Audit-Id: 952f9251-dd4e-4d64-989c-68606172a0ae
	I0610 12:11:52.457882    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.458487    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"640","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0610 12:11:52.458526    4588 pod_ready.go:92] pod "kube-proxy-rx2b2" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:52.458526    4588 pod_ready.go:81] duration metric: took 414.2651ms for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.458526    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.638866    4588 request.go:629] Waited for 180.175ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:11:52.639129    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:11:52.639129    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.639129    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.639129    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.642844    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:52.642844    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Audit-Id: 812d93e6-be52-4acc-b0ac-ecbab159315b
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.642844    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.642844    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.643940    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-813300","namespace":"kube-system","uid":"bd85735c-2f0d-48ab-bb0e-83f471c3af0a","resourceVersion":"387","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.mirror":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.seen":"2024-06-10T12:08:00.781972261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0610 12:11:52.842848    4588 request.go:629] Waited for 197.3782ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.843029    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.843029    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.843029    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.843029    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.846380    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:52.846380    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.847068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Audit-Id: 4d4f8b3e-cb53-4801-94ee-6aeaebe31fb6
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.847068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.847544    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:52.848051    4588 pod_ready.go:92] pod "kube-scheduler-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:52.848114    4588 pod_ready.go:81] duration metric: took 389.5849ms for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.848114    4588 pod_ready.go:38] duration metric: took 1.1981912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:11:52.848184    4588 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:11:52.860356    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:11:52.887428    4588 system_svc.go:56] duration metric: took 38.3195ms WaitForService to wait for kubelet
	I0610 12:11:52.887428    4588 kubeadm.go:576] duration metric: took 23.0368067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:11:52.887492    4588 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:11:53.045346    4588 request.go:629] Waited for 157.5222ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes
	I0610 12:11:53.045433    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes
	I0610 12:11:53.045433    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:53.045527    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:53.045527    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:53.049939    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:53.049939    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:53.049939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:53.049939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:53 GMT
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Audit-Id: f303c0c3-82b7-4c72-b12a-228fca786f50
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:53.051319    4588 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"642"},"items":[{"metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9269 chars]
	I0610 12:11:53.051858    4588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:11:53.052041    4588 node_conditions.go:123] node cpu capacity is 2
	I0610 12:11:53.052041    4588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:11:53.052041    4588 node_conditions.go:123] node cpu capacity is 2
	I0610 12:11:53.052041    4588 node_conditions.go:105] duration metric: took 164.5477ms to run NodePressure ...
	I0610 12:11:53.052127    4588 start.go:240] waiting for startup goroutines ...
	I0610 12:11:53.052168    4588 start.go:254] writing updated cluster config ...
	I0610 12:11:53.067074    4588 ssh_runner.go:195] Run: rm -f paused
	I0610 12:11:53.212519    4588 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 12:11:53.217393    4588 out.go:177] * Done! kubectl is now configured to use "multinode-813300" cluster and "default" namespace by default
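The node_ready/pod_ready polling above (one GET roughly every 500ms until the Ready condition flips to True) and the "Waited ... due to client-side throttling" messages both come from standard client-go behavior. Below is a minimal, hypothetical sketch of that wait pattern using client-go — not minikube's actual node_ready.go implementation; the kubeconfig path, node name, and QPS/Burst values are illustrative placeholders only.

```go
// Sketch of the poll-until-Ready pattern the log above walks through.
// Assumptions: client-go is available; path and node name are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	// Low client-side limits are what produce the "Waited ... due to
	// client-side throttling, not priority and fairness" lines: requests
	// beyond QPS/Burst are delayed locally before reaching the API server.
	cfg.QPS = 5
	cfg.Burst = 10

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms, give up after 6 minutes (the budget shown in the log).
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "multinode-813300-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```

The same loop structure, pointed at pods in the kube-system namespace instead of nodes, accounts for the pod_ready.go waits that follow the node becoming Ready.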
	
	
	==> Docker <==
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.123513267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235169134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235268934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235298134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235560636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:08:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 12:08:31 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:08:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a0bc6043f7b92f091f4ceee7db3e11617072391c6e5303f4ecdafdb06d4b585a/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.730390719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.730618620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.730710821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.732556631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.765650908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.765730109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.765799609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.766004410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.303731826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.304019627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.304037527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.304223128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:20 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:12:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 12:12:21 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:12:21Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.074732018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.076936421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.077116521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.077673422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
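
Note: the cri-dockerd "re-write config file" entries above show pod resolv.conf files being pointed either at the host-side resolver (172.17.144.1) or at the cluster DNS service (10.96.0.10, with the cluster search domains and ndots:5). To inspect what a given pod actually received, one sketch (the container ID is copied from the log line above) is:

    out/minikube-windows-amd64.exe -p multinode-813300 ssh "sudo cat /var/lib/docker/containers/9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d/resolv.conf"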
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	91782a06524c6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   51 seconds ago      Running             busybox                   0                   9ffef928b2474       busybox-fc5497c4f-z28tq
	f2e39052db195       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   a1ae7aed00678       coredns-7db6d8ff4d-kbhvv
	d32ce22e31b06       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   a0bc6043f7b92       storage-provisioner
	c39d54960e7d7       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              4 minutes ago       Running             kindnet-cni               0                   689b8976cc029       kindnet-29gbv
	afad8b05897e5       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                0                   62db1c721951a       kube-proxy-nrpvt
	bd1a6cd987430       a52dc94f0a912                                                                                         5 minutes ago       Running             kube-scheduler            0                   e3b6aa9a0e1d1       kube-scheduler-multinode-813300
	f1409bf44ff14       25a1387cdab82                                                                                         5 minutes ago       Running             kube-controller-manager   0                   f04d7b3d4fcc6       kube-controller-manager-multinode-813300
	34b9299d74e38       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   a10e49596de5e       etcd-multinode-813300
	ba52603f83875       91be940803172                                                                                         5 minutes ago       Running             kube-apiserver            0                   c7d28a97ba1c4       kube-apiserver-multinode-813300
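
The table above maps each running container back to its pod; the IMAGE column mixing full references with bare image IDs is normal docker output. To reproduce the listing outside the harness (a sketch using the same profile):

    out/minikube-windows-amd64.exe -p multinode-813300 ssh "sudo docker ps"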
	
	
	==> coredns [f2e39052db19] <==
	[INFO] 10.244.1.2:46174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001048s
	[INFO] 10.244.0.3:52212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003513s
	[INFO] 10.244.0.3:44369 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095801s
	[INFO] 10.244.0.3:38578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001615s
	[INFO] 10.244.0.3:38593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002977s
	[INFO] 10.244.0.3:38526 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137201s
	[INFO] 10.244.0.3:48445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001467s
	[INFO] 10.244.0.3:47462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000731s
	[INFO] 10.244.0.3:58225 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196101s
	[INFO] 10.244.1.2:35924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001833s
	[INFO] 10.244.1.2:51712 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001386s
	[INFO] 10.244.1.2:37161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007s
	[INFO] 10.244.1.2:37141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141s
	[INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001227s
	[INFO] 10.244.0.3:56133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247001s
	[INFO] 10.244.0.3:48451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000604s
	[INFO] 10.244.0.3:38368 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001264s
	[INFO] 10.244.1.2:44129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001056s
	[INFO] 10.244.1.2:34710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001955s
	[INFO] 10.244.1.2:59467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001589s
	[INFO] 10.244.1.2:53581 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002131s
	[INFO] 10.244.0.3:41745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	[INFO] 10.244.0.3:53512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001784s
	[INFO] 10.244.0.3:56441 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001208s
	[INFO] 10.244.0.3:55640 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001199s
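
Each line above is CoreDNS's log plugin output: client ip:port, query ID, type, class, name, protocol, request size, DO bit and EDNS buffer size, then rcode, response flags, response size, and duration. The NXDOMAIN answers for kubernetes.default and kubernetes.default.default.svc.cluster.local followed by NOERROR for kubernetes.default.svc.cluster.local are the expected ndots:5 search-path expansion, not a failure. The pattern can be reproduced from the test's busybox workload (a sketch, assuming the Deployment is named busybox and is still running):

    kubectl --context multinode-813300 exec deploy/busybox -- nslookup kubernetes.default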
	
	
	==> describe nodes <==
	Name:               multinode-813300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-813300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-813300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:07:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-813300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:13:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:12:36 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:12:36 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:12:36 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:12:36 +0000   Mon, 10 Jun 2024 12:08:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.159.171
	  Hostname:    multinode-813300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 04dc333273774adc9b2cebbeee4c799a
	  System UUID:                5734c1ff-f59b-f647-9c36-fb6d9a8cd541
	  Boot ID:                    c2d6ffa5-8803-4682-946d-e778abe2b7af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-z28tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 coredns-7db6d8ff4d-kbhvv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m58s
	  kube-system                 etcd-multinode-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m12s
	  kube-system                 kindnet-29gbv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m59s
	  kube-system                 kube-apiserver-multinode-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-multinode-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-nrpvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-multinode-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m55s  kube-proxy       
	  Normal  Starting                 5m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m13s  kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s  kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s  kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m59s  node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	  Normal  NodeReady                4m43s  kubelet          Node multinode-813300 status is now: NodeReady
	
	
	Name:               multinode-813300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-813300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-813300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:11:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-813300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:13:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:12:29 +0000   Mon, 10 Jun 2024 12:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:12:29 +0000   Mon, 10 Jun 2024 12:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:12:29 +0000   Mon, 10 Jun 2024 12:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:12:29 +0000   Mon, 10 Jun 2024 12:11:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.151.128
	  Hostname:    multinode-813300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d46b791e8a04ff7a071c88405a5a4eb
	  System UUID:                e053fc34-e8e5-6649-afc7-f62c0d458753
	  Boot ID:                    a3528c50-da8b-4321-8198-65ea5eca732a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-czxmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kindnet-r4nfq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      105s
	  kube-system                 kube-proxy-rx2b2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  105s (x2 over 105s)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x2 over 105s)  kubelet          Node multinode-813300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x2 over 105s)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           104s                 node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	  Normal  NodeReady                82s                  kubelet          Node multinode-813300-m02 status is now: NodeReady
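
Both node blocks above are kubectl-describe-node output captured by minikube logs. To re-check node conditions and allocations after the run, assuming the profile's kubeconfig context still exists:

    kubectl --context multinode-813300 get nodes -o wide
    kubectl --context multinode-813300 describe node multinode-813300-m02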
	
	
	==> dmesg <==
	[  +7.208733] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun10 12:06] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.196226] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[Jun10 12:07] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.123164] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.597831] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.216475] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.252946] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +2.841084] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.239357] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.201793] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.312951] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[ +11.774213] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.120592] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.210672] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.442980] systemd-fstab-generator[1714]: Ignoring "noauto" option for root device
	[  +0.108322] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.582828] systemd-fstab-generator[2127]: Ignoring "noauto" option for root device
	[Jun10 12:08] kauditd_printk_skb: 62 callbacks suppressed
	[ +15.292472] systemd-fstab-generator[2331]: Ignoring "noauto" option for root device
	[  +0.227353] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.603365] kauditd_printk_skb: 51 callbacks suppressed
	[Jun10 12:12] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [34b9299d74e3] <==
	{"level":"info","ts":"2024-06-10T12:07:55.148908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-10T12:07:55.149046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgPreVoteResp from 8f4442f54c46fb8d at term 1"}
	{"level":"info","ts":"2024-06-10T12:07:55.149074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became candidate at term 2"}
	{"level":"info","ts":"2024-06-10T12:07:55.149189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgVoteResp from 8f4442f54c46fb8d at term 2"}
	{"level":"info","ts":"2024-06-10T12:07:55.14921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became leader at term 2"}
	{"level":"info","ts":"2024-06-10T12:07:55.149221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8f4442f54c46fb8d elected leader 8f4442f54c46fb8d at term 2"}
	{"level":"info","ts":"2024-06-10T12:07:55.156121Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8f4442f54c46fb8d","local-member-attributes":"{Name:multinode-813300 ClientURLs:[https://172.17.159.171:2379]}","request-path":"/0/members/8f4442f54c46fb8d/attributes","cluster-id":"ede117c4f607edf2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T12:07:55.159001Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.159829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T12:07:55.160871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T12:07:55.163364Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T12:07:55.165819Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T12:07:55.166021Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.166252Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.166441Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.168652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.159.171:2379"}
	{"level":"info","ts":"2024-06-10T12:07:55.184009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T12:07:57.986982Z","caller":"traceutil/trace.go:171","msg":"trace[314319298] transaction","detail":"{read_only:false; response_revision:57; number_of_response:1; }","duration":"175.967496ms","start":"2024-06-10T12:07:57.811Z","end":"2024-06-10T12:07:57.986968Z","steps":["trace[314319298] 'process raft request'  (duration: 175.915395ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:07:57.985692Z","caller":"traceutil/trace.go:171","msg":"trace[688595595] transaction","detail":"{read_only:false; response_revision:56; number_of_response:1; }","duration":"176.678005ms","start":"2024-06-10T12:07:57.808997Z","end":"2024-06-10T12:07:57.985675Z","steps":["trace[688595595] 'process raft request'  (duration: 167.851999ms)"],"step_count":1}
	2024/06/10 12:08:00 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-10T12:11:45.034472Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.434792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-813300-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-06-10T12:11:45.034652Z","caller":"traceutil/trace.go:171","msg":"trace[1392918931] range","detail":"{range_begin:/registry/minions/multinode-813300-m02; range_end:; response_count:1; response_revision:627; }","duration":"372.686393ms","start":"2024-06-10T12:11:44.66195Z","end":"2024-06-10T12:11:45.034637Z","steps":["trace[1392918931] 'range keys from in-memory index tree'  (duration: 372.300191ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:11:45.034806Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T12:11:44.661936Z","time spent":"372.859294ms","remote":"127.0.0.1:55574","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3173,"request content":"key:\"/registry/minions/multinode-813300-m02\" "}
	{"level":"warn","ts":"2024-06-10T12:11:45.03612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.337283ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18126302413705664155 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-813300\" mod_revision:611 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-813300\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-813300\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-10T12:11:45.038666Z","caller":"traceutil/trace.go:171","msg":"trace[807238633] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"254.838757ms","start":"2024-06-10T12:11:44.783815Z","end":"2024-06-10T12:11:45.038654Z","steps":["trace[807238633] 'process raft request'  (duration: 57.529761ms)","trace[807238633] 'compare'  (duration: 193.138277ms)"],"step_count":2}
	
	
	==> kernel <==
	 12:13:13 up 7 min,  0 users,  load average: 0.35, 0.29, 0.14
	Linux multinode-813300 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c39d54960e7d] <==
	I0610 12:12:05.820726       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:12:15.834243       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:12:15.834351       1 main.go:227] handling current node
	I0610 12:12:15.834367       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:12:15.834375       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:12:25.843442       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:12:25.843527       1 main.go:227] handling current node
	I0610 12:12:25.843542       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:12:25.843549       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:12:35.849505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:12:35.849610       1 main.go:227] handling current node
	I0610 12:12:35.849625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:12:35.849633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:12:45.866099       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:12:45.866152       1 main.go:227] handling current node
	I0610 12:12:45.866170       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:12:45.866178       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:12:55.883210       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:12:55.883426       1 main.go:227] handling current node
	I0610 12:12:55.883562       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:12:55.883686       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:13:05.893577       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:13:05.893734       1 main.go:227] handling current node
	I0610 12:13:05.893787       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:13:05.893797       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
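
kindnet re-lists the nodes every ten seconds and programs routes for each remote PodCIDR, which is why the same two-node pattern repeats above. The CIDR assignments it is acting on can be checked with (a sketch):

    kubectl --context multinode-813300 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR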
	
	
	==> kube-apiserver [ba52603f8387] <==
	I0610 12:07:59.824973       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0610 12:07:59.841370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.159.171]
	I0610 12:07:59.843233       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 12:07:59.851566       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 12:08:00.422415       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0610 12:08:00.612432       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0610 12:08:00.612551       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0610 12:08:00.612582       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.8µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0610 12:08:00.613710       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0610 12:08:00.614096       1 timeout.go:142] post-timeout activity - time-elapsed: 1.826019ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0610 12:08:00.723908       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 12:08:00.768391       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0610 12:08:00.811944       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 12:08:14.681862       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0610 12:08:15.551635       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0610 12:12:25.854015       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62544: use of closed network connection
	E0610 12:12:26.395729       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62547: use of closed network connection
	E0610 12:12:27.123198       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62549: use of closed network connection
	E0610 12:12:27.655576       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62551: use of closed network connection
	E0610 12:12:28.202693       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62554: use of closed network connection
	E0610 12:12:28.742674       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62556: use of closed network connection
	E0610 12:12:29.738951       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62559: use of closed network connection
	E0610 12:12:40.298395       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62561: use of closed network connection
	E0610 12:12:40.800091       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62563: use of closed network connection
	E0610 12:12:51.330500       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62566: use of closed network connection
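
The "use of closed network connection" errors are clients on the Windows host side (172.17.144.1, apparently the Hyper-V NAT address) dropping TLS connections mid-read, most likely the harness's repeated status probes; the apiserver itself keeps serving. A direct readiness probe, assuming the context still resolves:

    kubectl --context multinode-813300 get --raw "/readyz?verbose"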
	
	
	==> kube-controller-manager [f1409bf44ff1] <==
	I0610 12:08:16.024148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.478301ms"
	I0610 12:08:16.151441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.784808ms"
	I0610 12:08:16.151859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="288.402µs"
	I0610 12:08:16.577624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.03545ms"
	I0610 12:08:16.593339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.556101ms"
	I0610 12:08:16.593508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.3µs"
	I0610 12:08:30.535681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130µs"
	I0610 12:08:30.566310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.4µs"
	I0610 12:08:32.538906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="180.301µs"
	I0610 12:08:32.610537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.137489ms"
	I0610 12:08:32.611020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5µs"
	I0610 12:08:34.635560       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:11:28.859639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:11:28.879298       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m02" podCIDRs=["10.244.1.0/24"]
	I0610 12:11:29.670639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:11:51.574110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:12:19.785464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.490556ms"
	I0610 12:12:19.804051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.524284ms"
	I0610 12:12:19.806222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0610 12:12:19.813010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.401µs"
	I0610 12:12:19.818841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.9µs"
	I0610 12:12:22.803157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.023114ms"
	I0610 12:12:22.803959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.7µs"
	I0610 12:12:23.117968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.704624ms"
	I0610 12:12:23.118507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.5µs"
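
These controller-manager lines trace the busybox Deployment's ReplicaSet being scaled up to the two test pods at 12:12:19-23. The resulting objects can be listed afterwards with (a sketch, assuming the Deployment was left in place):

    kubectl --context multinode-813300 get replicaset busybox-fc5497c4f -o wide
    kubectl --context multinode-813300 get pods -o wide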
	
	
	==> kube-proxy [afad8b05897e] <==
	I0610 12:08:17.787330       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:08:17.815813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.159.171"]
	I0610 12:08:17.929231       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:08:17.929304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:08:17.929325       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:08:17.933115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:08:17.933534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:08:17.933681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:08:17.935227       1 config.go:192] "Starting service config controller"
	I0610 12:08:17.935260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:08:17.935291       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:08:17.935297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:08:17.937731       1 config.go:319] "Starting node config controller"
	I0610 12:08:17.938095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:08:18.035433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:08:18.035502       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:08:18.038590       1 shared_informer.go:320] Caches are synced for node config
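
The route_localnet message above names the two configuration knobs: iptables.localhostNodePorts (to stop setting route_localnet=1) and nodePortAddresses (to restrict which addresses serve NodePorts). On a kubeadm-based cluster like this one they live in the kube-proxy ConfigMap, which can be inspected with:

    kubectl --context multinode-813300 -n kube-system get configmap kube-proxy -o yaml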
	
	
	==> kube-scheduler [bd1a6cd98743] <==
	W0610 12:07:58.426795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.427119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.503514       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 12:07:58.503568       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 12:07:58.610877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 12:07:58.611650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 12:07:58.611603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 12:07:58.612141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 12:07:58.614694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 12:07:58.614992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 12:07:58.752570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.752635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.810605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 12:07:58.810721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 12:07:58.815170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 12:07:58.815852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 12:07:58.816493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 12:07:58.816687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 12:07:58.834947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.836145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.838693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 12:07:58.838938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 12:07:58.897162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 12:07:58.897200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:08:01.565495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
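
The forbidden list/watch errors above all fall within the first seconds after apiserver start, before the system:kube-scheduler RBAC bindings were being served; the closing "Caches are synced" line at 12:08:01 shows they resolved on their own. The permission can be verified after the fact with (a sketch):

    kubectl --context multinode-813300 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler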
	
	
	==> kubelet <==
	Jun 10 12:09:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:09:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:10:00 multinode-813300 kubelet[2134]: E0610 12:10:00.916679    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:10:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:10:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:10:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:10:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:11:00 multinode-813300 kubelet[2134]: E0610 12:11:00.916892    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:11:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:11:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:11:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:11:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:12:00 multinode-813300 kubelet[2134]: E0610 12:12:00.916115    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:12:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:12:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:12:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:12:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:12:19 multinode-813300 kubelet[2134]: I0610 12:12:19.781320    2134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=235.78130106 podStartE2EDuration="3m55.78130106s" podCreationTimestamp="2024-06-10 12:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:08:32.625524206 +0000 UTC m=+31.981921907" watchObservedRunningTime="2024-06-10 12:12:19.78130106 +0000 UTC m=+259.137698761"
	Jun 10 12:12:19 multinode-813300 kubelet[2134]: I0610 12:12:19.782441    2134 topology_manager.go:215] "Topology Admit Handler" podUID="3191c71a-8c87-4390-8232-8653f494d1f0" podNamespace="default" podName="busybox-fc5497c4f-z28tq"
	Jun 10 12:12:19 multinode-813300 kubelet[2134]: I0610 12:12:19.915298    2134 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkl2j\" (UniqueName: \"kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j\") pod \"busybox-fc5497c4f-z28tq\" (UID: \"3191c71a-8c87-4390-8232-8653f494d1f0\") " pod="default/busybox-fc5497c4f-z28tq"
	Jun 10 12:13:00 multinode-813300 kubelet[2134]: E0610 12:13:00.916013    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:13:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:13:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:13:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:13:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
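
The hourly iptables-canary failures mean the guest kernel (Buildroot 5.10.207) provides no ip6tables nat table; kubelet only uses the canary to detect rule flushes, so on this IPv4-only cluster the error is cosmetic. Whether the module can be loaded at all can be checked with (a sketch):

    out/minikube-windows-amd64.exe -p multinode-813300 ssh "sudo modprobe ip6table_nat; sudo ip6tables -t nat -L -n"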
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:13:04.503563    1656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
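
The stderr warning above (repeated by every minikube invocation in this report) means the Docker CLI's current-context metadata file is missing on the Jenkins host; minikube logs it and falls back to the default endpoint, so it is noise rather than a failure. A host-side cleanup would be along these lines:

    docker context ls
    docker context use default
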
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-813300 -n multinode-813300
E0610 12:13:17.586822    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-813300 -n multinode-813300: (13.0457293s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-813300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (59.76s)

                                                
                                    
TestMultiNode/serial/AddNode (266.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-813300 -v 3 --alsologtostderr
E0610 12:14:41.881210    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-813300 -v 3 --alsologtostderr: exit status 90 (3m48.3673466s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-813300 as [worker]
	* Starting "multinode-813300-m03" worker node in "multinode-813300" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:13:28.643449     272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0610 12:13:28.734440     272 out.go:291] Setting OutFile to fd 380 ...
	I0610 12:13:28.734679     272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:13:28.734679     272 out.go:304] Setting ErrFile to fd 612...
	I0610 12:13:28.734679     272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:13:28.748452     272 mustload.go:65] Loading cluster: multinode-813300
	I0610 12:13:28.749519     272 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:13:28.750330     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:13:31.079441     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:13:31.079441     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:13:31.079441     272 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:13:31.079441     272 api_server.go:166] Checking apiserver status ...
	I0610 12:13:31.099448     272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:13:31.099732     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:13:33.434707     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:13:33.434707     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:13:33.434707     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:13:36.167894     272 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:13:36.168828     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:13:36.168828     272 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:13:36.294542     272 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.1950518s)
	I0610 12:13:36.305383     272 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1957/cgroup
	W0610 12:13:36.328264     272 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1957/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 12:13:36.341279     272 ssh_runner.go:195] Run: ls
	I0610 12:13:36.348018     272 api_server.go:253] Checking apiserver healthz at https://172.17.159.171:8443/healthz ...
	I0610 12:13:36.356621     272 api_server.go:279] https://172.17.159.171:8443/healthz returned 200:
	ok
	I0610 12:13:36.360636     272 out.go:177] * Adding node m03 to cluster multinode-813300 as [worker]
	I0610 12:13:36.364692     272 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:13:36.365222     272 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:13:36.365556     272 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:13:36.370501     272 out.go:177] * Starting "multinode-813300-m03" worker node in "multinode-813300" cluster
	I0610 12:13:36.373334     272 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:13:36.373334     272 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 12:13:36.373334     272 cache.go:56] Caching tarball of preloaded images
	I0610 12:13:36.373334     272 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:13:36.374439     272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
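
The preload steps are a plain cache lookup: one lz4 tarball of pre-pulled images per Kubernetes version and container runtime, downloaded once and reused; when it is already on disk, only its existence is verified. Roughly, with the path hard-coded from the log:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Cache layout: one tarball per (k8s version, runtime, storage driver, arch).
        preload := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4`
        if _, err := os.Stat(preload); err == nil {
            fmt.Println("found local preload, skipping download")
        } else {
            fmt.Println("preload missing, would download:", err)
        }
    }
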
	I0610 12:13:36.374719     272 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:13:36.382009     272 start.go:360] acquireMachinesLock for multinode-813300-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:13:36.382009     272 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300-m03"
	I0610 12:13:36.382740     272 start.go:93] Provisioning new machine with config: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0610 12:13:36.382953     272 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0610 12:13:36.386004     272 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 12:13:36.387192     272 start.go:159] libmachine.API.Create for "multinode-813300" (driver="hyperv")
	I0610 12:13:36.387306     272 client.go:168] LocalClient.Create starting
	I0610 12:13:36.387451     272 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 12:13:36.387451     272 main.go:141] libmachine: Decoding PEM data...
	I0610 12:13:36.387958     272 main.go:141] libmachine: Parsing certificate...
	I0610 12:13:36.388157     272 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 12:13:36.388343     272 main.go:141] libmachine: Decoding PEM data...
	I0610 12:13:36.388343     272 main.go:141] libmachine: Parsing certificate...
	I0610 12:13:36.388502     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 12:13:38.482228     272 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 12:13:38.482228     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:13:38.482309     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 12:13:40.343729     272 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 12:13:40.344751     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:13:40.344845     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:13:41.966951     272 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:13:41.966951     272 main.go:141] libmachine: [stderr =====>] : 
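
Before any VM is created the driver runs two privilege probes: membership in the local Hyper-V Administrators group (well-known SID S-1-5-32-578), which returns False here, then the built-in Administrator role, which returns True and is sufficient on its own. A sketch of such a probe from the Go side, reusing the exact PowerShell expression shown above; parsing stdout for True/False is the assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same expression the log shows: evaluate IsInRole in a fresh PowerShell.
        expr := `@([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        if err != nil {
            fmt.Println("powershell failed:", err)
            return
        }
        isAdmin := strings.TrimSpace(string(out)) == "True"
        fmt.Println("administrator:", isAdmin)
    }
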
	I0610 12:13:41.966951     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:13:46.123443     272 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:13:46.123443     272 main.go:141] libmachine: [stderr =====>] : 
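
The switch query asks PowerShell for JSON precisely so the Go side can unmarshal it instead of scraping table output; the only match on this host is the built-in "Default Switch", caught by the Id clause rather than the External clause (its SwitchType of 1 means Internal). A sketch of the decoding half, with the struct shape inferred from the Select projection in the command:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Fields mirror the `Select Id, Name, SwitchType` projection in the log.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // 0=Private, 1=Internal, 2=External
    }

    func main() {
        raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
        var switches []vmSwitch
        if err := json.Unmarshal(raw, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("using switch %q (%s)\n", s.Name, s.Id)
        }
    }
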
	I0610 12:13:46.126157     272 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 12:13:46.563593     272 main.go:141] libmachine: Creating SSH key...
	I0610 12:13:46.825144     272 main.go:141] libmachine: Creating VM...
	I0610 12:13:46.825144     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:13:50.020028     272 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:13:50.021014     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:13:50.021014     272 main.go:141] libmachine: Using switch "Default Switch"
	I0610 12:13:50.021014     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:13:51.891347     272 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:13:51.891646     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:13:51.891646     272 main.go:141] libmachine: Creating VHD
	I0610 12:13:51.891810     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 12:13:55.909407     272 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3B84FB95-EBC4-4A90-AFE8-688D44C6244D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 12:13:55.909407     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:13:55.909407     272 main.go:141] libmachine: Writing magic tar header
	I0610 12:13:55.910305     272 main.go:141] libmachine: Writing SSH key tar header
	I0610 12:13:55.921273     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 12:13:59.258334     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:13:59.258334     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:13:59.259391     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\disk.vhd' -SizeBytes 20000MB
	I0610 12:14:01.986417     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:14:01.986417     272 main.go:141] libmachine: [stderr =====>] : 
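
The VHD sequence above (New-VHD -Fixed at 10MB, write a tar stream over its data region, Convert-VHD to Dynamic, Resize-VHD to 20000MB) is how the driver smuggles the freshly generated SSH key onto the disk: a fixed VHD stores its payload from byte 0, so the guest's init can scan the raw disk for the tar marker and install the keys before SSH ever comes up, and the conversion to a dynamic disk preserves that payload. A rough sketch of the tar-writing step; the "boot2docker, please format-me" marker name comes from docker-machine-derived drivers and should be treated as an assumption here:

    package main

    import (
        "archive/tar"
        "os"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Write over the start of the fixed VHD: its data region begins at
        // byte 0, so the guest sees this tar stream on the raw disk.
        f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
        must(err)
        defer f.Close()

        pub, err := os.ReadFile("id_rsa.pub")
        must(err)

        tw := tar.NewWriter(f)
        // Marker entry: the guest init scans disks for this name and, on a
        // match, formats the disk and installs the key material that follows.
        must(tw.WriteHeader(&tar.Header{Name: "boot2docker, please format-me", Typeflag: tar.TypeReg, Mode: 0644}))
        must(tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Typeflag: tar.TypeReg, Mode: 0644, Size: int64(len(pub))}))
        _, err = tw.Write(pub)
        must(err)
        must(tw.Close())
    }
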
	I0610 12:14:01.987488     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-813300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 12:14:05.930689     272 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-813300-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 12:14:05.930689     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:05.931593     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-813300-m03 -DynamicMemoryEnabled $false
	I0610 12:14:08.362942     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:14:08.363783     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:08.363783     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-813300-m03 -Count 2
	I0610 12:14:10.742868     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:14:10.742927     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:10.742927     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-813300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\boot2docker.iso'
	I0610 12:14:13.622949     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:14:13.622949     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:13.623330     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-813300-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\disk.vhd'
	I0610 12:14:16.563262     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:14:16.563262     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:16.563262     272 main.go:141] libmachine: Starting VM...
	I0610 12:14:16.563761     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300-m03
	I0610 12:14:19.872276     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:14:19.873145     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:19.873145     272 main.go:141] libmachine: Waiting for host to start...
	I0610 12:14:19.873145     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:14:22.400828     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:14:22.401096     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:22.401147     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:14:25.131562     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:14:25.131562     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:26.133628     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:14:28.531566     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:14:28.531932     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:28.532041     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:14:31.295237     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:14:31.295237     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:32.305444     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:14:34.705555     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:14:34.706154     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:34.706209     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:14:37.462458     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:14:37.462702     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:38.474378     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:14:40.886889     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:14:40.886889     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:40.887135     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:14:43.735716     272 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:14:43.738416     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:44.741759     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:14:47.166673     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:14:47.166673     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:47.166673     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:14:50.013139     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:14:50.013139     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:50.014003     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:14:52.332479     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:14:52.332479     272 main.go:141] libmachine: [stderr =====>] : 
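
"Waiting for host to start" is the retry loop visible above: poll the VM state, then the first adapter's first IP address; an empty result (the adapter enumerates before DHCP completes) means sleep about a second and try again, until 172.17.156.194 appears roughly 30 seconds after Start-VM. As a standalone sketch, with the PowerShell expressions taken from the log and the retry budget assumed:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell expression and returns trimmed stdout (Windows-only;
    // errors are ignored for brevity, an empty string just means "retry").
    func ps(expr string) string {
        out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        vm := "multinode-813300-m03"
        for i := 0; i < 60; i++ {
            state := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            if state == "Running" {
                // Before DHCP finishes, the adapter is up but the address list is empty.
                if ip := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)); ip != "" {
                    fmt.Println("host is up at", ip)
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for host")
    }
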
	I0610 12:14:52.333313     272 machine.go:94] provisionDockerMachine start ...
	I0610 12:14:52.333387     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:14:54.640283     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:14:54.640283     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:54.640406     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:14:57.416427     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:14:57.416427     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:57.423497     272 main.go:141] libmachine: Using SSH client type: native
	I0610 12:14:57.435518     272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.156.194 22 <nil> <nil>}
	I0610 12:14:57.435518     272 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:14:57.573085     272 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:14:57.573085     272 buildroot.go:166] provisioning hostname "multinode-813300-m03"
	I0610 12:14:57.573215     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:14:59.862086     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:14:59.862749     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:14:59.862817     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:02.595120     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:02.595120     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:02.601395     272 main.go:141] libmachine: Using SSH client type: native
	I0610 12:15:02.601687     272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.156.194 22 <nil> <nil>}
	I0610 12:15:02.601687     272 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300-m03 && echo "multinode-813300-m03" | sudo tee /etc/hostname
	I0610 12:15:02.754094     272 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300-m03
	
	I0610 12:15:02.754253     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:15:05.060202     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:15:05.060637     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:05.060739     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:07.829787     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:07.829787     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:07.838097     272 main.go:141] libmachine: Using SSH client type: native
	I0610 12:15:07.838097     272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.156.194 22 <nil> <nil>}
	I0610 12:15:07.838644     272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:15:07.981855     272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:15:07.981855     272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:15:07.981855     272 buildroot.go:174] setting up certificates
	I0610 12:15:07.981855     272 provision.go:84] configureAuth start
	I0610 12:15:07.981855     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:15:10.354126     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:15:10.354337     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:10.354337     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:13.167279     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:13.167936     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:13.167997     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:15:15.488165     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:15:15.488165     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:15.488517     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:18.233850     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:18.233850     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:18.233924     272 provision.go:143] copyHostCerts
	I0610 12:15:18.234396     272 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:15:18.234421     272 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:15:18.234421     272 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:15:18.236217     272 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:15:18.236217     272 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:15:18.237090     272 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:15:18.238331     272 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:15:18.238331     272 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:15:18.238934     272 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:15:18.239511     272 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300-m03 san=[127.0.0.1 172.17.156.194 localhost minikube multinode-813300-m03]
	I0610 12:15:18.380547     272 provision.go:177] copyRemoteCerts
	I0610 12:15:18.399839     272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:15:18.399839     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:15:20.678312     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:15:20.678312     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:20.678312     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:23.449065     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:23.450202     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:23.450985     272 sshutil.go:53] new ssh client: &{IP:172.17.156.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\id_rsa Username:docker}
	I0610 12:15:23.564740     272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1648589s)
	I0610 12:15:23.564992     272 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:15:23.621179     272 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 12:15:23.669643     272 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 12:15:23.718886     272 provision.go:87] duration metric: took 15.7368097s to configureAuth
	I0610 12:15:23.718886     272 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:15:23.719616     272 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:15:23.719688     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:15:26.072803     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:15:26.072875     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:26.073191     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:28.897031     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:28.897132     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:28.903821     272 main.go:141] libmachine: Using SSH client type: native
	I0610 12:15:28.904038     272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.156.194 22 <nil> <nil>}
	I0610 12:15:28.904038     272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:15:29.036758     272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:15:29.036758     272 buildroot.go:70] root file system type: tmpfs
	I0610 12:15:29.037064     272 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:15:29.037128     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:15:31.421697     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:15:31.421776     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:31.421922     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:34.227085     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:34.227085     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:34.235094     272 main.go:141] libmachine: Using SSH client type: native
	I0610 12:15:34.235275     272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.156.194 22 <nil> <nil>}
	I0610 12:15:34.235818     272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:15:34.394032     272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
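Comparing the command above with its echoed result shows the one in-flight transformation: the unit text travels inside a double-quoted printf argument, so ExecReload's variable is sent escaped as \$MAINPID and lands on disk as the literal $MAINPID, to be expanded later by systemd rather than by the remote shell. A purely illustrative sketch of that escaping step:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        unit := "ExecReload=/bin/kill -s HUP $MAINPID\n"
        // Inside the remote double-quoted printf argument, the shell would
        // expand $MAINPID (to nothing); escape it so systemd sees the literal.
        escaped := strings.ReplaceAll(unit, "$", `\$`)
        cmd := fmt.Sprintf(`sudo mkdir -p /lib/systemd/system && printf %%s "%s" | sudo tee /lib/systemd/system/docker.service.new`, escaped)
        fmt.Println(cmd)
    }
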
	I0610 12:15:34.394290     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:15:36.729715     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:15:36.730254     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:36.730335     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:39.546409     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:39.546409     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:39.555105     272 main.go:141] libmachine: Using SSH client type: native
	I0610 12:15:39.555899     272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.156.194 22 <nil> <nil>}
	I0610 12:15:39.555899     272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:15:41.782324     272 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:15:41.782324     272 machine.go:97] duration metric: took 49.4486101s to provisionDockerMachine
	I0610 12:15:41.782324     272 client.go:171] duration metric: took 2m5.3939694s to LocalClient.Create
	I0610 12:15:41.782324     272 start.go:167] duration metric: took 2m5.3941161s to libmachine.API.Create "multinode-813300"
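
The diff -u ... || { mv ...; systemctl ...; } one-liner above is an install-if-changed guard: diff exits non-zero both when the units differ and, as in this run, when /lib/systemd/system/docker.service does not exist yet ("can't stat"), so the new unit is moved into place and docker is enabled and restarted; only a byte-identical existing unit skips the restart. The same decision expressed locally in Go, paths kept for illustration:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        oldUnit, err := os.ReadFile("/lib/systemd/system/docker.service")
        newUnit, err2 := os.ReadFile("/lib/systemd/system/docker.service.new")
        if err2 != nil {
            panic(err2)
        }
        // err != nil covers the "can't stat" case from the log: a missing
        // current unit counts as changed, exactly like diff's non-zero exit.
        if err != nil || !bytes.Equal(oldUnit, newUnit) {
            fmt.Println("unit changed: would mv .new into place, daemon-reload, enable, restart docker")
            return
        }
        fmt.Println("unit unchanged: skip restart")
    }
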
	I0610 12:15:41.782324     272 start.go:293] postStartSetup for "multinode-813300-m03" (driver="hyperv")
	I0610 12:15:41.782324     272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:15:41.799210     272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:15:41.799210     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:15:44.228663     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:15:44.228663     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:44.229345     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:46.976494     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:46.976494     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:46.977208     272 sshutil.go:53] new ssh client: &{IP:172.17.156.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\id_rsa Username:docker}
	I0610 12:15:47.080424     272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2811718s)
	I0610 12:15:47.093357     272 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:15:47.099970     272 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:15:47.099970     272 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:15:47.100921     272 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:15:47.101224     272 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:15:47.116493     272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:15:47.139458     272 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:15:47.190935     272 start.go:296] duration metric: took 5.4085676s for postStartSetup
	I0610 12:15:47.194381     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:15:49.566810     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:15:49.567521     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:49.567580     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:52.427212     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:52.427840     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:52.427893     272 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:15:52.431554     272 start.go:128] duration metric: took 2m16.0474985s to createHost
	I0610 12:15:52.431554     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:15:54.807904     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:15:54.807904     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:54.808033     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:15:57.573933     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:15:57.573933     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:15:57.579639     272 main.go:141] libmachine: Using SSH client type: native
	I0610 12:15:57.579639     272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.156.194 22 <nil> <nil>}
	I0610 12:15:57.579639     272 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 12:15:57.704963     272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718021757.713661082
	
	I0610 12:15:57.704963     272 fix.go:216] guest clock: 1718021757.713661082
	I0610 12:15:57.704963     272 fix.go:229] Guest: 2024-06-10 12:15:57.713661082 +0000 UTC Remote: 2024-06-10 12:15:52.431554 +0000 UTC m=+143.874294601 (delta=5.282107082s)
	I0610 12:15:57.705145     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:16:00.024868     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:16:00.024868     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:16:00.024997     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:16:02.816548     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:16:02.816548     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:16:02.821492     272 main.go:141] libmachine: Using SSH client type: native
	I0610 12:16:02.821963     272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.156.194 22 <nil> <nil>}
	I0610 12:16:02.821963     272 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718021757
	I0610 12:16:02.958544     272 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:15:57 UTC 2024
	
	I0610 12:16:02.958544     272 fix.go:236] clock set: Mon Jun 10 12:15:57 UTC 2024
	 (err=<nil>)
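
The clock fix-up reads the guest's date +%s.%N and compares it with a host-side reference captured a few seconds earlier (the "Remote" timestamp above), so most of the 5.282107082s delta is simply elapsed wall time; the guest is then snapped to the host's current time with sudo date -s @1718021757. A sketch of the comparison, with the correction threshold assumed:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1718021757.713661082" // the guest's `date +%s.%N` from the log
        secs, err := strconv.ParseInt(strings.SplitN(guestOut, ".", 2)[0], 10, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(secs, 0)
        delta := guest.Sub(time.Now()) // against a live reference; this run measured +5.28s
        fmt.Printf("guest clock delta: %v\n", delta)
        if delta > 2*time.Second || delta < -2*time.Second { // threshold is an assumption
            // Reset the guest to the host's current time, as the log's date -s does.
            fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
        }
    }
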
	I0610 12:16:02.958544     272 start.go:83] releasing machines lock for "multinode-813300-m03", held for 2m26.5753478s
	I0610 12:16:02.958544     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:16:05.313876     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:16:05.313876     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:16:05.314887     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:16:08.088492     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:16:08.088492     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:16:08.093127     272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:16:08.093296     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:16:08.104544     272 ssh_runner.go:195] Run: systemctl --version
	I0610 12:16:08.104544     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:16:10.476987     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:16:10.476987     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:16:10.477079     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:16:10.502869     272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:16:10.503809     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:16:10.503809     272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:16:13.442708     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:16:13.442708     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:16:13.443085     272 sshutil.go:53] new ssh client: &{IP:172.17.156.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\id_rsa Username:docker}
	I0610 12:16:13.462267     272 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:16:13.462326     272 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:16:13.462857     272 sshutil.go:53] new ssh client: &{IP:172.17.156.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\id_rsa Username:docker}
	I0610 12:16:13.539747     272 ssh_runner.go:235] Completed: systemctl --version: (5.4351584s)
	I0610 12:16:13.555750     272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 12:16:13.655530     272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:16:13.655675     272 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5622574s)
	I0610 12:16:13.667828     272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:16:13.700074     272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 12:16:13.700186     272 start.go:494] detecting cgroup driver to use...
	I0610 12:16:13.700438     272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:16:13.752477     272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:16:13.790687     272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:16:13.812302     272 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:16:13.824400     272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:16:13.857548     272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:16:13.893373     272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:16:13.927819     272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:16:13.960657     272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:16:13.996864     272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:16:14.032616     272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:16:14.069186     272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 12:16:14.110623     272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:16:14.144488     272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:16:14.178142     272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:16:14.402248     272 ssh_runner.go:195] Run: sudo systemctl restart containerd
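
The sed batch above rewrites /etc/containerd/config.toml so containerd agrees with the cluster settings: pin the pause image, drop legacy systemd_cgroup keys, map the runc v1 shim names to io.containerd.runc.v2, and force SystemdCgroup = false, which is the "cgroupfs" driver the log names; then daemon-reload and restart containerd. One of those rewrites as the equivalent Go regexp (the real edit is the remote sed shown above):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
        // Mirror of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }
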
	I0610 12:16:14.439696     272 start.go:494] detecting cgroup driver to use...
	I0610 12:16:14.456728     272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:16:14.499768     272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:16:14.543946     272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:16:14.604485     272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:16:14.645027     272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:16:14.688519     272 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:16:14.754522     272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:16:14.782882     272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:16:14.837120     272 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:16:14.856570     272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:16:14.878291     272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:16:14.928153     272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:16:15.156804     272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:16:15.364918     272 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:16:15.365190     272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
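
The 130-byte /etc/docker/daemon.json pushed here is what pins dockerd itself to the "cgroupfs" driver. The payload is not echoed in the log; a plausible reconstruction, based only on the stated purpose and the byte count, would look like this (every field below is an assumption, not a capture):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Hypothetical reconstruction of the daemon.json written above.
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "log-opts":       map[string]string{"max-size": "100m"},
            "storage-driver": "overlay2",
        }
        b, _ := json.Marshal(cfg)
        fmt.Printf("%d bytes: %s\n", len(b), b)
    }
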
	I0610 12:16:15.416281     272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:16:15.630657     272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:17:16.773719     272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1425752s)
	I0610 12:17:16.787693     272 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0610 12:17:16.825772     272 out.go:177] 
	W0610 12:17:16.827974     272 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 10 12:15:40 multinode-813300-m03 systemd[1]: Starting Docker Application Container Engine...
	Jun 10 12:15:40 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:15:40.228343477Z" level=info msg="Starting up"
	Jun 10 12:15:40 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:15:40.229295698Z" level=info msg="containerd not running, starting managed containerd"
	Jun 10 12:15:40 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:15:40.230479823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=677
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.265031659Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.298432471Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.298537773Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.298707377Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.298729977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.298807379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.298939081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.299145286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.299280189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.299297389Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.299309289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.299403291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.299733498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.303027969Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.303133871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.303334875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.303434477Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.303558880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.303706283Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.303837786Z" level=info msg="metadata content store policy set" policy=shared
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.388704994Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.388831497Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.388859697Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.389244906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.389594013Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.390424631Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.391427152Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.391713158Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.391739859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.391758459Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.391846961Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.391881562Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.391899662Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.391976164Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392006064Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392024465Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392042265Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392056866Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392084266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392103067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392118267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392135867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392205369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392226969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392241769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392263370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392279070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392300071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392315271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392330271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392345172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392362772Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392385973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392401473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392415273Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392731480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392785381Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392804681Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392820782Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392833482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392850282Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.392864383Z" level=info msg="NRI interface is disabled by configuration."
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.393318792Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.393694800Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.393892305Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 10 12:15:40 multinode-813300-m03 dockerd[677]: time="2024-06-10T12:15:40.394238012Z" level=info msg="containerd successfully booted in 0.130977s"
	Jun 10 12:15:41 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:15:41.310512698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 10 12:15:41 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:15:41.343635560Z" level=info msg="Loading containers: start."
	Jun 10 12:15:41 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:15:41.625440136Z" level=info msg="Loading containers: done."
	Jun 10 12:15:41 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:15:41.653072312Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 10 12:15:41 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:15:41.653390120Z" level=info msg="Daemon has completed initialization"
	Jun 10 12:15:41 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:15:41.788716131Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 10 12:15:41 multinode-813300-m03 systemd[1]: Started Docker Application Container Engine.
	Jun 10 12:15:41 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:15:41.789411448Z" level=info msg="API listen on [::]:2376"
	Jun 10 12:16:15 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:16:15.666482325Z" level=info msg="Processing signal 'terminated'"
	Jun 10 12:16:15 multinode-813300-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jun 10 12:16:15 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:16:15.668584035Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 10 12:16:15 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:16:15.669707440Z" level=info msg="Daemon shutdown complete"
	Jun 10 12:16:15 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:16:15.669972241Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 10 12:16:15 multinode-813300-m03 dockerd[671]: time="2024-06-10T12:16:15.670142342Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 10 12:16:16 multinode-813300-m03 systemd[1]: docker.service: Deactivated successfully.
	Jun 10 12:16:16 multinode-813300-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jun 10 12:16:16 multinode-813300-m03 systemd[1]: Starting Docker Application Container Engine...
	Jun 10 12:16:16 multinode-813300-m03 dockerd[1022]: time="2024-06-10T12:16:16.751419200Z" level=info msg="Starting up"
	Jun 10 12:17:16 multinode-813300-m03 dockerd[1022]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 10 12:17:16 multinode-813300-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 10 12:17:16 multinode-813300-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 10 12:17:16 multinode-813300-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
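	The restarted daemon (pid 1022) fails while dialing the system containerd socket /run/containerd/containerd.sock, even though the first start (pid 671) had launched its own managed containerd on /var/run/docker/containerd/containerd.sock; this suggests the rewritten docker.service unit points the second start at a system containerd that never came up in time. A minimal in-VM diagnostic sketch for this failure mode (illustrative commands, not captured from this run; unit names assumed standard):

	# Was the system containerd unit started, and does it own the socket dockerd is dialing?
	sudo systemctl status containerd --no-pager
	sudo journalctl -u containerd --no-pager | tail -n 50
	ls -l /run/containerd/containerd.sock
	# Restart in dependency order, then retry the docker unit
	sudo systemctl restart containerd && sudo systemctl restart docker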
	W0610 12:17:16.828969     272 out.go:239] * 
	W0610 12:17:16.868846     272 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_8a500d2181d400fd32bfc5983efc601de14422c3_11.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 12:17:16.871420     272 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-813300 -v 3 --alsologtostderr" : exit status 90
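For local triage, the failing step can be replayed in isolation against the same profile, and the partially provisioned node can be inspected directly if its VM survived the failed add (illustrative invocations, assuming the multinode-813300 profile and the m03 VM still exist):

	out/minikube-windows-amd64.exe node add -p multinode-813300 -v 3 --alsologtostderr
	out/minikube-windows-amd64.exe ssh -p multinode-813300 -n multinode-813300-m03 -- sudo journalctl -u docker --no-pager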
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-813300 -n multinode-813300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-813300 -n multinode-813300: (13.3247929s)
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-813300 logs -n 25: (9.4548943s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p mount-start-1-314000                           | mount-start-1-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:00 UTC | 10 Jun 24 12:01 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-314000 ssh -- ls                    | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:01 UTC | 10 Jun 24 12:01 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-314000                           | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:01 UTC | 10 Jun 24 12:01 UTC |
	| start   | -p mount-start-2-314000                           | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:01 UTC | 10 Jun 24 12:03 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:03 UTC |                     |
	|         | --profile mount-start-2-314000 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid 0      |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-314000 ssh -- ls                    | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:03 UTC | 10 Jun 24 12:04 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-314000                           | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:04 UTC |
	| delete  | -p mount-start-1-314000                           | mount-start-1-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:04 UTC |
	| start   | -p multinode-813300                               | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:11 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- apply -f                   | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- rollout                    | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC |                     |
	|         | busybox-fc5497c4f-czxmt -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC |                     |
	|         | busybox-fc5497c4f-z28tq -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1                         |                      |                   |         |                     |                     |
	| node    | add -p multinode-813300 -v 3                      | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:13 UTC |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
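	The Audit table above is rendered from minikube's persistent audit log; when full argument strings or sub-minute timestamps are needed, the raw JSON entries can be read directly (hedged example; the path is assumed from the MINIKUBE_HOME value logged below, and the audit.json location is standard but unverified for this run):

	cat "C:\Users\jenkins.minikube6\minikube-integration\.minikube\logs\audit.json"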
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 12:04:43
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 12:04:43.867977    4588 out.go:291] Setting OutFile to fd 712 ...
	I0610 12:04:43.868768    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:04:43.868768    4588 out.go:304] Setting ErrFile to fd 776...
	I0610 12:04:43.868768    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:04:43.892667    4588 out.go:298] Setting JSON to false
	I0610 12:04:43.895275    4588 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20972,"bootTime":1718000111,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 12:04:43.895275    4588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 12:04:43.900472    4588 out.go:177] * [multinode-813300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 12:04:43.904368    4588 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:04:43.904368    4588 notify.go:220] Checking for updates...
	I0610 12:04:43.909526    4588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 12:04:43.912565    4588 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 12:04:43.917533    4588 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 12:04:43.919941    4588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 12:04:43.923788    4588 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:04:43.924271    4588 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 12:04:49.675599    4588 out.go:177] * Using the hyperv driver based on user configuration
	I0610 12:04:49.679131    4588 start.go:297] selected driver: hyperv
	I0610 12:04:49.679287    4588 start.go:901] validating driver "hyperv" against <nil>
	I0610 12:04:49.679287    4588 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 12:04:49.728962    4588 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 12:04:49.730655    4588 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:04:49.730655    4588 cni.go:84] Creating CNI manager for ""
	I0610 12:04:49.730655    4588 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 12:04:49.730655    4588 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 12:04:49.730655    4588 start.go:340] cluster config:
	{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
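	The cluster config above is also persisted as the profile's config.json (see the profile.go:143 "Saving config" line a few entries below), so it can be inspected or diffed without re-parsing the log; one hedged way to dump all profile configs (the --output flag is standard for this subcommand, though output shape may vary by minikube version):

	out/minikube-windows-amd64.exe profile list --output json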
	I0610 12:04:49.730655    4588 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:04:49.735782    4588 out.go:177] * Starting "multinode-813300" primary control-plane node in "multinode-813300" cluster
	I0610 12:04:49.737542    4588 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:04:49.738389    4588 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 12:04:49.738389    4588 cache.go:56] Caching tarball of preloaded images
	I0610 12:04:49.738521    4588 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:04:49.738973    4588 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 12:04:49.739157    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:04:49.739400    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json: {Name:mke1756b0f63dd0c0eff0216ad43e7c3fc903678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:04:49.740675    4588 start.go:360] acquireMachinesLock for multinode-813300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:04:49.740675    4588 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300"
	I0610 12:04:49.740675    4588 start.go:93] Provisioning new machine with config: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 12:04:49.740675    4588 start.go:125] createHost starting for "" (driver="hyperv")
	I0610 12:04:49.742990    4588 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 12:04:49.744068    4588 start.go:159] libmachine.API.Create for "multinode-813300" (driver="hyperv")
	I0610 12:04:49.744068    4588 client.go:168] LocalClient.Create starting
	I0610 12:04:49.744355    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 12:04:49.744355    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:04:49.745001    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:04:49.745251    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 12:04:49.745288    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:04:49.745537    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:04:49.745648    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 12:04:51.938878    4588 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 12:04:51.938878    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:04:51.939553    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 12:04:53.807457    4588 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 12:04:53.807457    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:04:53.808222    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:04:55.393412    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:04:55.393412    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:04:55.393412    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:04:59.273212    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:04:59.274143    4588 main.go:141] libmachine: [stderr =====>] : 
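The JSON above shows why the run lands on "Default Switch": SwitchType follows the Hyper-V VMSwitchType enum (0 = Private, 1 = Internal, 2 = External), so this switch is Internal and is matched by its well-known Id rather than by the External clause. The same query libmachine runs, reformatted for readability:

    Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType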
	I0610 12:04:59.276499    4588 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 12:04:59.786597    4588 main.go:141] libmachine: Creating SSH key...
	I0610 12:05:00.178242    4588 main.go:141] libmachine: Creating VM...
	I0610 12:05:00.178340    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:05:03.335727    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:05:03.335727    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:03.336442    4588 main.go:141] libmachine: Using switch "Default Switch"
	I0610 12:05:03.336442    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:05:05.206486    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:05:05.206839    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:05.206839    4588 main.go:141] libmachine: Creating VHD
	I0610 12:05:05.206938    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 12:05:09.220962    4588 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D79874B4-719D-480C-BEAA-32F87CD7D741
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 12:05:09.221783    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:09.221783    4588 main.go:141] libmachine: Writing magic tar header
	I0610 12:05:09.221873    4588 main.go:141] libmachine: Writing SSH key tar header
	I0610 12:05:09.231477    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 12:05:12.585103    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:12.585103    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:12.586033    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\disk.vhd' -SizeBytes 20000MB
	I0610 12:05:15.285675    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:15.285675    4588 main.go:141] libmachine: [stderr =====>] : 
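Taken together, the disk steps above create a tiny 10MB fixed VHD, write the boot2docker "magic" tar header and SSH key into it, convert it to a dynamic VHD, and grow it to the requested 20000MB. Condensed into plain PowerShell (a sketch; $machineDir is an assumption standing in for the machine directory, and the tar-header write happens in Go between the first two commands):

    $machineDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300'
    Hyper-V\New-VHD     -Path "$machineDir\fixed.vhd" -SizeBytes 10MB -Fixed
    # ...libmachine writes the magic tar header and SSH key into fixed.vhd here...
    Hyper-V\Convert-VHD -Path "$machineDir\fixed.vhd" -DestinationPath "$machineDir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD  -Path "$machineDir\disk.vhd" -SizeBytes 20000MB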
	I0610 12:05:15.285962    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-813300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 12:05:19.111640    4588 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-813300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 12:05:19.111640    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:19.112222    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-813300 -DynamicMemoryEnabled $false
	I0610 12:05:21.531378    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:21.531378    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:21.531378    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-813300 -Count 2
	I0610 12:05:23.889725    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:23.889725    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:23.890596    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-813300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\boot2docker.iso'
	I0610 12:05:26.621094    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:26.621720    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:26.621781    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-813300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\disk.vhd'
	I0610 12:05:29.472370    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:29.472370    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:29.472370    4588 main.go:141] libmachine: Starting VM...
	I0610 12:05:29.473255    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300
	I0610 12:05:32.754805    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:32.754805    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:32.754805    4588 main.go:141] libmachine: Waiting for host to start...
	I0610 12:05:32.754805    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:35.217643    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:35.218086    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:35.218212    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:37.944028    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:37.944028    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:38.950550    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:41.379344    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:41.379344    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:41.380252    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:44.115382    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:44.115382    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:45.121347    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:47.512650    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:47.512650    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:47.513336    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:50.281297    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:50.281297    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:51.289490    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:53.673938    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:53.674570    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:53.674570    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:56.397148    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:56.398100    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:57.399811    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:59.797095    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:59.797152    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:59.797152    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:02.530578    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:02.530578    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:02.530897    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:04.770192    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:04.770234    4588 main.go:141] libmachine: [stderr =====>] : 
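The "Waiting for host to start..." exchange above is a poll loop: libmachine re-reads the VM state and the first adapter's first IP address until the guest reports one (about 30s in this run). A condensed PowerShell equivalent (a sketch; the 1s interval is an assumption):

    # Poll until the first network adapter reports an address, then print it.
    while (-not ((Hyper-V\Get-VM multinode-813300).NetworkAdapters[0].IPAddresses[0])) {
        Start-Sleep -Seconds 1
    }
    (Hyper-V\Get-VM multinode-813300).NetworkAdapters[0].IPAddresses[0]   # 172.17.159.171 in this run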
	I0610 12:06:04.770296    4588 machine.go:94] provisionDockerMachine start ...
	I0610 12:06:04.770296    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:07.058629    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:07.058629    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:07.059046    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:09.847341    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:09.848100    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:09.853806    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:09.864878    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:09.864878    4588 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:06:09.992682    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:06:09.992682    4588 buildroot.go:166] provisioning hostname "multinode-813300"
	I0610 12:06:09.992830    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:12.311800    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:12.311800    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:12.312418    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:15.048157    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:15.048157    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:15.055378    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:15.055541    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:15.055541    4588 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300 && echo "multinode-813300" | sudo tee /etc/hostname
	I0610 12:06:15.227442    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300
	
	I0610 12:06:15.227442    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:17.470385    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:17.470385    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:17.470748    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:20.178259    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:20.178259    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:20.185354    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:20.185738    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:20.185872    4588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:06:20.340364    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
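The shell snippet above makes the new hostname resolve locally: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended, and the outer grep skips both when the name is already present. A quick check from the host (a sketch using minikube's ssh passthrough):

    out/minikube-windows-amd64.exe -p multinode-813300 ssh -- "hostname && grep 127.0.1.1 /etc/hosts"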
	I0610 12:06:20.340364    4588 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:06:20.340507    4588 buildroot.go:174] setting up certificates
	I0610 12:06:20.340593    4588 provision.go:84] configureAuth start
	I0610 12:06:20.340593    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:22.647449    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:22.647770    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:22.647870    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:25.365433    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:25.366134    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:25.366227    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:27.676201    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:27.677237    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:27.677302    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:30.462238    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:30.462450    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:30.462450    4588 provision.go:143] copyHostCerts
	I0610 12:06:30.462450    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 12:06:30.463207    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:06:30.463207    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:06:30.463939    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:06:30.464777    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 12:06:30.465582    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:06:30.465582    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:06:30.465582    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:06:30.466886    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 12:06:30.466886    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:06:30.466886    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:06:30.467429    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:06:30.467908    4588 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300 san=[127.0.0.1 172.17.159.171 localhost minikube multinode-813300]
	I0610 12:06:30.880090    4588 provision.go:177] copyRemoteCerts
	I0610 12:06:30.893142    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:06:30.893241    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:33.157947    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:33.158648    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:33.158648    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:35.872452    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:35.873367    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:35.873367    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:06:35.983936    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0907517s)
	I0610 12:06:35.984059    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 12:06:35.984539    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:06:36.037427    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 12:06:36.037713    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 12:06:36.087322    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 12:06:36.087855    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 12:06:36.138563    4588 provision.go:87] duration metric: took 15.7977809s to configureAuth
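configureAuth generates a server certificate whose SANs cover 127.0.0.1, the VM's IP, localhost, and both machine names, then copies ca.pem, server.pem, and server-key.pem into /etc/docker for the TLS-secured Docker endpoint on :2376. To spot-check that the files landed (a sketch):

    out/minikube-windows-amd64.exe -p multinode-813300 ssh -- "sudo ls -l /etc/docker"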
	I0610 12:06:36.138653    4588 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:06:36.138819    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:06:36.138819    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:38.411440    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:38.411440    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:38.411440    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:41.131406    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:41.131406    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:41.138066    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:41.138428    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:41.138428    4588 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:06:41.270867    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:06:41.270942    4588 buildroot.go:70] root file system type: tmpfs
	I0610 12:06:41.271213    4588 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:06:41.271282    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:43.585535    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:43.585535    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:43.585535    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:46.334256    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:46.334341    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:46.340258    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:46.340937    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:46.340937    4588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:06:46.504832    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 12:06:46.505009    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:48.805219    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:48.806280    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:48.806423    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:51.509193    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:51.509586    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:51.514228    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:51.514228    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:51.514228    4588 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:06:53.697279    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:06:53.697853    4588 machine.go:97] duration metric: took 48.9265831s to provisionDockerMachine
	I0610 12:06:53.697853    4588 client.go:171] duration metric: took 2m3.9527697s to LocalClient.Create
	I0610 12:06:53.698031    4588 start.go:167] duration metric: took 2m3.9529368s to libmachine.API.Create "multinode-813300"
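The diff || { mv ...; } one-liner above keeps the unit update idempotent: docker.service.new only replaces /lib/systemd/system/docker.service (followed by daemon-reload, enable, and restart) when the two differ. On this fresh VM, diff fails because no unit exists yet, so the swap branch runs and the enable creates the symlink shown. To inspect the installed unit afterwards (a sketch; the log itself does the same with systemctl cat further down):

    out/minikube-windows-amd64.exe -p multinode-813300 ssh -- "systemctl cat docker.service"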
	I0610 12:06:53.698085    4588 start.go:293] postStartSetup for "multinode-813300" (driver="hyperv")
	I0610 12:06:53.698115    4588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:06:53.710436    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:06:53.710436    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:55.966771    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:55.966771    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:55.966771    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:58.718421    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:58.718421    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:58.719167    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:06:58.827171    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1166938s)
	I0610 12:06:58.839755    4588 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:06:58.846848    4588 command_runner.go:130] > NAME=Buildroot
	I0610 12:06:58.846848    4588 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 12:06:58.846848    4588 command_runner.go:130] > ID=buildroot
	I0610 12:06:58.846848    4588 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 12:06:58.846848    4588 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 12:06:58.847038    4588 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:06:58.847038    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:06:58.847652    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:06:58.848877    4588 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:06:58.848877    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 12:06:58.861906    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:06:58.883111    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:06:58.930581    4588 start.go:296] duration metric: took 5.2324233s for postStartSetup
	I0610 12:06:58.932577    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:01.213042    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:01.214102    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:01.214102    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:03.953887    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:03.954621    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:03.954896    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:07:03.957997    4588 start.go:128] duration metric: took 2m14.216153s to createHost
	I0610 12:07:03.957997    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:06.232653    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:06.232653    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:06.232653    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:08.922879    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:08.922879    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:08.928691    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:07:08.928691    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:07:08.928691    4588 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 12:07:09.066125    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718021229.075627913
	
	I0610 12:07:09.066125    4588 fix.go:216] guest clock: 1718021229.075627913
	I0610 12:07:09.066125    4588 fix.go:229] Guest: 2024-06-10 12:07:09.075627913 +0000 UTC Remote: 2024-06-10 12:07:03.9579973 +0000 UTC m=+140.257965001 (delta=5.117630613s)
	I0610 12:07:09.066240    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:11.379014    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:11.379014    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:11.379357    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:14.163833    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:14.163833    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:14.170036    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:07:14.170200    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:07:14.170200    4588 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718021229
	I0610 12:07:14.308564    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:07:09 UTC 2024
	
	I0610 12:07:14.308564    4588 fix.go:236] clock set: Mon Jun 10 12:07:09 UTC 2024
	 (err=<nil>)
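The 5.117630613s delta above is just the guest epoch (1718021229.075627913) minus the host-side remote timestamp; writing the truncated epoch back with date -s @1718021229 realigns the guest to within a second. A manual comparison from the host would look roughly like this (a sketch; variable names are assumptions, and PowerShell's $Host is reserved, hence $hostEpoch):

    $guestEpoch = [double](out/minikube-windows-amd64.exe -p multinode-813300 ssh -- "date -u +%s.%N")
    $hostEpoch  = [DateTimeOffset]::UtcNow.ToUnixTimeMilliseconds() / 1000
    $guestEpoch - $hostEpoch   # positive when the guest clock runs ahead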
	I0610 12:07:14.308564    4588 start.go:83] releasing machines lock for "multinode-813300", held for 2m24.5667064s
	I0610 12:07:14.308728    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:16.583361    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:16.583361    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:16.583361    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:19.333520    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:19.334493    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:19.338942    4588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:07:19.339115    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:19.349878    4588 ssh_runner.go:195] Run: cat /version.json
	I0610 12:07:19.349878    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:21.705493    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:21.705493    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:21.705493    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:21.736050    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:21.736147    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:21.736191    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:24.564607    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:24.564844    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:24.564844    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:07:24.595261    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:24.595261    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:24.596193    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:07:24.730348    4588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 12:07:24.730492    4588 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3915062s)
	I0610 12:07:24.730492    4588 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 12:07:24.730492    4588 ssh_runner.go:235] Completed: cat /version.json: (5.3805704s)
	I0610 12:07:24.743901    4588 ssh_runner.go:195] Run: systemctl --version
	I0610 12:07:24.755276    4588 command_runner.go:130] > systemd 252 (252)
	I0610 12:07:24.755521    4588 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 12:07:24.768011    4588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 12:07:24.776306    4588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 12:07:24.777113    4588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:07:24.788496    4588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:07:24.821922    4588 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 12:07:24.822097    4588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
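Because this profile uses the docker runtime without an explicit CNI, minikube sidelines competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix; the find above caught 87-podman-bridge.conflist. To confirm (a sketch):

    out/minikube-windows-amd64.exe -p multinode-813300 ssh -- "ls /etc/cni/net.d"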
	I0610 12:07:24.822097    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:07:24.822097    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:07:24.858836    4588 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 12:07:24.870754    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:07:24.906067    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:07:24.927089    4588 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:07:24.939539    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:07:24.975868    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:07:25.012044    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:07:25.051040    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:07:25.093321    4588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:07:25.128698    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:07:25.161844    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:07:25.194094    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 12:07:25.228546    4588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:07:25.253020    4588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 12:07:25.266396    4588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:07:25.300773    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:25.529366    4588 ssh_runner.go:195] Run: sudo systemctl restart containerd
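The sed pipeline above rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false to match the cgroupfs driver, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d before containerd restarts. A spot-check (a sketch):

    out/minikube-windows-amd64.exe -p multinode-813300 ssh -- "grep -E 'SystemdCgroup|sandbox_image|runc' /etc/containerd/config.toml"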
	I0610 12:07:25.568641    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:07:25.581890    4588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:07:25.609889    4588 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 12:07:25.610189    4588 command_runner.go:130] > [Unit]
	I0610 12:07:25.610189    4588 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 12:07:25.610189    4588 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 12:07:25.610189    4588 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 12:07:25.610264    4588 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 12:07:25.610264    4588 command_runner.go:130] > StartLimitBurst=3
	I0610 12:07:25.610264    4588 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 12:07:25.610264    4588 command_runner.go:130] > [Service]
	I0610 12:07:25.610323    4588 command_runner.go:130] > Type=notify
	I0610 12:07:25.610323    4588 command_runner.go:130] > Restart=on-failure
	I0610 12:07:25.610323    4588 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 12:07:25.610381    4588 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 12:07:25.610381    4588 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 12:07:25.610381    4588 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 12:07:25.610460    4588 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 12:07:25.610460    4588 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 12:07:25.610460    4588 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 12:07:25.610541    4588 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 12:07:25.610541    4588 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 12:07:25.610541    4588 command_runner.go:130] > ExecStart=
	I0610 12:07:25.610541    4588 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 12:07:25.610727    4588 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 12:07:25.610787    4588 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 12:07:25.610787    4588 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 12:07:25.610787    4588 command_runner.go:130] > LimitNOFILE=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > LimitNPROC=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > LimitCORE=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 12:07:25.610845    4588 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 12:07:25.610845    4588 command_runner.go:130] > TasksMax=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > TimeoutStartSec=0
	I0610 12:07:25.610922    4588 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 12:07:25.610922    4588 command_runner.go:130] > Delegate=yes
	I0610 12:07:25.610922    4588 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 12:07:25.610922    4588 command_runner.go:130] > KillMode=process
	I0610 12:07:25.610978    4588 command_runner.go:130] > [Install]
	I0610 12:07:25.610978    4588 command_runner.go:130] > WantedBy=multi-user.target
	I0610 12:07:25.624039    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:07:25.661400    4588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:07:25.720292    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:07:25.757987    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:07:25.796201    4588 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:07:25.863195    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:07:25.889245    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:07:25.926689    4588 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 12:07:25.939863    4588 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:07:25.945195    4588 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 12:07:25.958144    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:07:25.974980    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:07:26.023598    4588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:07:26.238985    4588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:07:26.451509    4588 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:07:26.451626    4588 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 12:07:26.501126    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:26.701662    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:07:29.249741    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5480592s)
	I0610 12:07:29.262915    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 12:07:29.301406    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:07:29.341268    4588 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 12:07:29.568906    4588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 12:07:29.785481    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:29.992495    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 12:07:30.037215    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:07:30.085524    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:30.300979    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
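cri-dockerd is the CRI shim that lets the kubelet drive the Docker engine; the sequence above unmasks and enables its socket, reloads systemd, and restarts both units before the 60s wait on /var/run/cri-dockerd.sock. Checking both units in one go (a sketch):

    out/minikube-windows-amd64.exe -p multinode-813300 ssh -- "systemctl is-active cri-docker.socket cri-docker.service"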
	I0610 12:07:30.418219    4588 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 12:07:30.432434    4588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 12:07:30.441630    4588 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 12:07:30.441768    4588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 12:07:30.441768    4588 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0610 12:07:30.441768    4588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 12:07:30.441768    4588 command_runner.go:130] > Access: 2024-06-10 12:07:30.340771420 +0000
	I0610 12:07:30.441768    4588 command_runner.go:130] > Modify: 2024-06-10 12:07:30.340771420 +0000
	I0610 12:07:30.441768    4588 command_runner.go:130] > Change: 2024-06-10 12:07:30.344771436 +0000
	I0610 12:07:30.441768    4588 command_runner.go:130] >  Birth: -
	I0610 12:07:30.441768    4588 start.go:562] Will wait 60s for crictl version
	I0610 12:07:30.453463    4588 ssh_runner.go:195] Run: which crictl
	I0610 12:07:30.460096    4588 command_runner.go:130] > /usr/bin/crictl
	I0610 12:07:30.473201    4588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:07:30.530265    4588 command_runner.go:130] > Version:  0.1.0
	I0610 12:07:30.530298    4588 command_runner.go:130] > RuntimeName:  docker
	I0610 12:07:30.530298    4588 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 12:07:30.530298    4588 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 12:07:30.530453    4588 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
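The same runtime metadata can be queried by hand against the socket the runner just waited for; passing --runtime-endpoint makes the target explicit instead of relying on crictl's default endpoint lookup (expected output matches the values logged above):

	$ sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1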
	I0610 12:07:30.541045    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:07:30.577679    4588 command_runner.go:130] > 26.1.4
	I0610 12:07:30.586938    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:07:30.617216    4588 command_runner.go:130] > 26.1.4
	I0610 12:07:30.622417    4588 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 12:07:30.622417    4588 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 12:07:30.629450    4588 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 12:07:30.629450    4588 ip.go:210] interface addr: 172.17.144.1/20
	I0610 12:07:30.643235    4588 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 12:07:30.649840    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
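The one-liner above is an idempotent replace-or-add: grep -v strips any stale host.minikube.internal line, the echo appends the current mapping, and the result is staged in a /tmp file named after the shell PID ($$) before being copied back over /etc/hosts with sudo. The same pattern works for any pinned hostname (name and IP below are placeholders):

	{ grep -v $'\tmyhost.internal$' /etc/hosts; echo "10.0.0.5	myhost.internal"; } > /tmp/h.$$ \
	  && sudo cp /tmp/h.$$ /etc/hosts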
	I0610 12:07:30.670389    4588 kubeadm.go:877] updating cluster {Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 12:07:30.670389    4588 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:07:30.679574    4588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 12:07:30.702356    4588 docker.go:685] Got preloaded images: 
	I0610 12:07:30.702356    4588 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0610 12:07:30.713877    4588 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 12:07:30.734201    4588 command_runner.go:139] > {"Repositories":{}}
	I0610 12:07:30.745928    4588 ssh_runner.go:195] Run: which lz4
	I0610 12:07:30.752458    4588 command_runner.go:130] > /usr/bin/lz4
	I0610 12:07:30.752458    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0610 12:07:30.763475    4588 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 12:07:30.769540    4588 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 12:07:30.770227    4588 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 12:07:30.770389    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0610 12:07:32.729738    4588 docker.go:649] duration metric: took 1.9762697s to copy over tarball
	I0610 12:07:32.743906    4588 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 12:07:41.714684    4588 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9705398s)
	I0610 12:07:41.714777    4588 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 12:07:41.787089    4588 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 12:07:41.807203    4588 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0610 12:07:41.807257    4588 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
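repositories.json is Docker's tag index: it maps each repository tag and digest to a content-addressable image ID, which is why minikube can restore the preloaded tags by rewriting this one file and restarting the daemon. To inspect it by hand, assuming jq is available (it is not guaranteed to be on the minikube ISO):

	$ sudo cat /var/lib/docker/image/overlay2/repositories.json | jq 'keys'
	[
	  "gcr.io/k8s-minikube/storage-provisioner",
	  "registry.k8s.io/coredns/coredns",
	  ...
	]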
	I0610 12:07:41.859157    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:42.090821    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:07:44.907266    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8158182s)
	I0610 12:07:44.919479    4588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 12:07:44.944175    4588 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:07:44.946511    4588 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 12:07:44.946557    4588 cache_images.go:84] Images are preloaded, skipping loading
	I0610 12:07:44.946658    4588 kubeadm.go:928] updating node { 172.17.159.171 8443 v1.30.1 docker true true} ...
	I0610 12:07:44.946933    4588 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.159.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
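The bare ExecStart= line in the kubelet drop-in above is the standard systemd override idiom: for list-valued directives, an empty assignment clears whatever the base unit defined, and the following ExecStart= supplies the replacement command line. The merged result can be inspected on the guest with:

	$ systemctl cat kubelet.service    # prints the base unit followed by the 10-kubeadm.conf drop-in
	$ sudo systemctl daemon-reload     # required after editing any drop-in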
	I0610 12:07:44.956339    4588 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 12:07:44.991381    4588 command_runner.go:130] > cgroupfs
	I0610 12:07:44.992435    4588 cni.go:84] Creating CNI manager for ""
	I0610 12:07:44.992435    4588 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 12:07:44.992435    4588 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 12:07:44.992562    4588 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.159.171 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-813300 NodeName:multinode-813300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.159.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.159.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 12:07:44.992992    4588 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.159.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-813300"
	  kubeletExtraArgs:
	    node-ip: 172.17.159.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.159.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
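Before this rendered config is handed to kubeadm init, it can be exercised without touching the node: kubeadm's --dry-run mode prints the manifests it would generate instead of writing them. A sketch, assuming the versioned binary path used elsewhere in this log:

	$ sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run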
	
	I0610 12:07:45.005272    4588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 12:07:45.024093    4588 command_runner.go:130] > kubeadm
	I0610 12:07:45.024093    4588 command_runner.go:130] > kubectl
	I0610 12:07:45.024093    4588 command_runner.go:130] > kubelet
	I0610 12:07:45.024093    4588 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 12:07:45.037363    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 12:07:45.055298    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0610 12:07:45.086932    4588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 12:07:45.118552    4588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0610 12:07:45.162013    4588 ssh_runner.go:195] Run: grep 172.17.159.171	control-plane.minikube.internal$ /etc/hosts
	I0610 12:07:45.168121    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:07:45.202562    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:45.425101    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:07:45.455626    4588 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300 for IP: 172.17.159.171
	I0610 12:07:45.455626    4588 certs.go:194] generating shared ca certs ...
	I0610 12:07:45.455747    4588 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.456562    4588 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 12:07:45.456877    4588 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 12:07:45.457049    4588 certs.go:256] generating profile certs ...
	I0610 12:07:45.457786    4588 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key
	I0610 12:07:45.457868    4588 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.crt with IP's: []
	I0610 12:07:45.708342    4588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.crt ...
	I0610 12:07:45.708342    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.crt: {Name:mk54c1a1cec89ed140bb491b38817a3186ba7310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.709853    4588 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key ...
	I0610 12:07:45.709853    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key: {Name:mkf00743da8bbcad3b010f0cbb5cd0436ce14710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.710226    4588 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887
	I0610 12:07:45.710226    4588 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.159.171]
	I0610 12:07:45.907956    4588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887 ...
	I0610 12:07:45.907956    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887: {Name:mka8c1bb2a2baa00cc0af3681bd930d57ff75330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.909711    4588 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887 ...
	I0610 12:07:45.909711    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887: {Name:mkb18584b7bb3bb732e73307ae39bca648c3c22a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.910791    4588 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt
	I0610 12:07:45.926670    4588 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key
	I0610 12:07:45.927884    4588 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key
	I0610 12:07:45.928002    4588 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt with IP's: []
	I0610 12:07:46.173843    4588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt ...
	I0610 12:07:46.173843    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt: {Name:mkb418cf9d8991e80905755cce3c6f6de1ae9ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:46.174831    4588 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key ...
	I0610 12:07:46.174831    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key: {Name:mk51867a74a39076c910c5b47bfa2ded184ede24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:46.175803    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 12:07:46.175803    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 12:07:46.186849    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 12:07:46.187823    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 12:07:46.187823    4588 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 12:07:46.187823    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 12:07:46.187823    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 12:07:46.188815    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 12:07:46.188815    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 12:07:46.188815    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 12:07:46.188815    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 12:07:46.188815    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.189810    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.192830    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 12:07:46.241117    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 12:07:46.288030    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 12:07:46.335188    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 12:07:46.376270    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 12:07:46.423248    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 12:07:46.475484    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 12:07:46.527362    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 12:07:46.576727    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 12:07:46.624358    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 12:07:46.675098    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 12:07:46.722137    4588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 12:07:46.780283    4588 ssh_runner.go:195] Run: openssl version
	I0610 12:07:46.789810    4588 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 12:07:46.800778    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 12:07:46.837222    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.844961    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.845084    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.859483    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.867918    4588 command_runner.go:130] > b5213941
	I0610 12:07:46.882717    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 12:07:46.919428    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 12:07:46.952808    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.958882    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.958882    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.971190    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.980429    4588 command_runner.go:130] > 51391683
	I0610 12:07:46.998007    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 12:07:47.035525    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 12:07:47.070284    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.077578    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.078136    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.091592    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.100124    4588 command_runner.go:130] > 3ec20f2e
	I0610 12:07:47.115904    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
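Each test -L / ln -fs pair above builds the hash-named symlink OpenSSL's default verifier expects: openssl x509 -hash prints the subject-name hash, and a <hash>.0 link in /etc/ssl/certs lets chain lookup find the CA without a full rehash pass. Checking against the values in this log (ls output abbreviated):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ ls -l /etc/ssl/certs/b5213941.0
	lrwxrwxrwx 1 root root ... /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem
	$ openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem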
	I0610 12:07:47.147726    4588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:07:47.154748    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:07:47.154748    4588 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:07:47.156073    4588 kubeadm.go:391] StartCluster: {Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:07:47.164675    4588 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 12:07:47.200694    4588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 12:07:47.220824    4588 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0610 12:07:47.220824    4588 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0610 12:07:47.220824    4588 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0610 12:07:47.236087    4588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 12:07:47.265597    4588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:07:47.286023    4588 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:07:47.286107    4588 kubeadm.go:156] found existing configuration files:
	
	I0610 12:07:47.298886    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 12:07:47.316688    4588 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:07:47.317271    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:07:47.332217    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 12:07:47.363611    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 12:07:47.381321    4588 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:07:47.381903    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:07:47.393546    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 12:07:47.423937    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 12:07:47.440026    4588 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:07:47.440026    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:07:47.459787    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 12:07:47.496088    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 12:07:47.517579    4588 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:07:47.517579    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:07:47.528796    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 12:07:47.546992    4588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 12:07:47.980483    4588 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:07:47.980577    4588 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
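This preflight warning is expected in this flow: minikube manages the kubelet through its own systemd drop-in and starts the unit explicitly (sudo systemctl start kubelet, above), so the service not being enabled for boot is harmless here. On a hand-rolled node the remedy is the command kubeadm itself suggests:

	$ sudo systemctl enable kubelet.service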
	I0610 12:08:01.301108    4588 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 12:08:01.301202    4588 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0610 12:08:01.301289    4588 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 12:08:01.301289    4588 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 12:08:01.301289    4588 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:08:01.301289    4588 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:08:01.301289    4588 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:08:01.301289    4588 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:08:01.302226    4588 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 12:08:01.302295    4588 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 12:08:01.302295    4588 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:08:01.302295    4588 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:08:01.305130    4588 out.go:204]   - Generating certificates and keys ...
	I0610 12:08:01.305388    4588 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 12:08:01.305388    4588 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 12:08:01.305588    4588 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 12:08:01.305588    4588 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 12:08:01.305751    4588 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 12:08:01.305751    4588 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 12:08:01.306003    4588 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0610 12:08:01.306003    4588 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 12:08:01.306299    4588 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 12:08:01.306299    4588 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0610 12:08:01.307259    4588 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.307345    4588 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.307672    4588 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 12:08:01.307672    4588 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 12:08:01.307672    4588 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:08:01.307672    4588 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:08:01.308946    4588 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:08:01.308946    4588 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:08:01.308946    4588 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:08:01.308946    4588 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:08:01.308946    4588 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:08:01.309472    4588 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:08:01.312844    4588 out.go:204]   - Booting up control plane ...
	I0610 12:08:01.312844    4588 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:08:01.313599    4588 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:08:01.313744    4588 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:08:01.313744    4588 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:08:01.313744    4588 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:08:01.313744    4588 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:08:01.314297    4588 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:08:01.314351    4588 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:08:01.314536    4588 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:08:01.314536    4588 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:08:01.314536    4588 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 12:08:01.314536    4588 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 12:08:01.315111    4588 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:08:01.315111    4588 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:08:01.315111    4588 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:08:01.315111    4588 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:08:01.315111    4588 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002683261s
	I0610 12:08:01.315111    4588 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002683261s
	I0610 12:08:01.315111    4588 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:08:01.315111    4588 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:08:01.315955    4588 command_runner.go:130] > [api-check] The API server is healthy after 7.00192s
	I0610 12:08:01.316020    4588 kubeadm.go:309] [api-check] The API server is healthy after 7.00192s
	I0610 12:08:01.316205    4588 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:08:01.316285    4588 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:08:01.316552    4588 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:08:01.316552    4588 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:08:01.316784    4588 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:08:01.316861    4588 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:08:01.317080    4588 kubeadm.go:309] [mark-control-plane] Marking the node multinode-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:08:01.317295    4588 command_runner.go:130] > [mark-control-plane] Marking the node multinode-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:08:01.317406    4588 kubeadm.go:309] [bootstrap-token] Using token: d6w50f.8d5fdo5xwqangh2s
	I0610 12:08:01.317406    4588 command_runner.go:130] > [bootstrap-token] Using token: d6w50f.8d5fdo5xwqangh2s
	I0610 12:08:01.321841    4588 out.go:204]   - Configuring RBAC rules ...
	I0610 12:08:01.322484    4588 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:08:01.322549    4588 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:08:01.322728    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:08:01.322728    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:08:01.323029    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:08:01.323029    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:08:01.323184    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:08:01.323184    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:08:01.323458    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:08:01.323458    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:08:01.323458    4588 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:08:01.323458    4588 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:08:01.323458    4588 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:08:01.323458    4588 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:08:01.323458    4588 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 12:08:01.323458    4588 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 12:08:01.323458    4588 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 12:08:01.323458    4588 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 12:08:01.323458    4588 kubeadm.go:309] 
	I0610 12:08:01.323458    4588 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0610 12:08:01.323458    4588 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 12:08:01.324750    4588 kubeadm.go:309] 
	I0610 12:08:01.324822    4588 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0610 12:08:01.324822    4588 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 12:08:01.324822    4588 kubeadm.go:309] 
	I0610 12:08:01.324822    4588 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0610 12:08:01.324822    4588 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 12:08:01.324822    4588 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:08:01.324822    4588 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:08:01.325344    4588 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:08:01.325383    4588 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:08:01.325383    4588 kubeadm.go:309] 
	I0610 12:08:01.325530    4588 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 12:08:01.325530    4588 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0610 12:08:01.325530    4588 kubeadm.go:309] 
	I0610 12:08:01.325530    4588 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:08:01.325530    4588 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:08:01.325530    4588 kubeadm.go:309] 
	I0610 12:08:01.325530    4588 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 12:08:01.325530    4588 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0610 12:08:01.325530    4588 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:08:01.326068    4588 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:08:01.326160    4588 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:08:01.326160    4588 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:08:01.326160    4588 kubeadm.go:309] 
	I0610 12:08:01.326435    4588 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:08:01.326435    4588 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:08:01.326712    4588 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 12:08:01.326712    4588 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0610 12:08:01.326712    4588 kubeadm.go:309] 
	I0610 12:08:01.327011    4588 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.327011    4588 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.327428    4588 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 \
	I0610 12:08:01.327428    4588 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 \
	I0610 12:08:01.327428    4588 kubeadm.go:309] 	--control-plane 
	I0610 12:08:01.327574    4588 command_runner.go:130] > 	--control-plane 
	I0610 12:08:01.327574    4588 kubeadm.go:309] 
	I0610 12:08:01.327749    4588 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:08:01.327749    4588 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:08:01.327749    4588 kubeadm.go:309] 
	I0610 12:08:01.327914    4588 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.327914    4588 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.328143    4588 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
	I0610 12:08:01.328143    4588 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
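The join command echoed above carries two credentials: a short-lived bootstrap token and a --discovery-token-ca-cert-hash, which is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A joining node uses that hash to pin the control plane's CA before trusting anything it serves. A minimal Go sketch of recomputing the digest from a PEM-encoded CA certificate (the /etc/kubernetes/pki/ca.crt path is kubeadm's default location, not something shown in this log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // kubeadm's default CA location on a control-plane node (assumed path).
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm formats the discovery hash as "sha256:" plus the hex digest
        // of the certificate's RawSubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }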
	I0610 12:08:01.328143    4588 cni.go:84] Creating CNI manager for ""
	I0610 12:08:01.328143    4588 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 12:08:01.330463    4588 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 12:08:01.347784    4588 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 12:08:01.356731    4588 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 12:08:01.356776    4588 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0610 12:08:01.356776    4588 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0610 12:08:01.356776    4588 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 12:08:01.356776    4588 command_runner.go:130] > Access: 2024-06-10 12:05:58.512184000 +0000
	I0610 12:08:01.356776    4588 command_runner.go:130] > Modify: 2024-06-06 15:35:25.000000000 +0000
	I0610 12:08:01.356867    4588 command_runner.go:130] > Change: 2024-06-10 12:05:49.137000000 +0000
	I0610 12:08:01.356867    4588 command_runner.go:130] >  Birth: -
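Before applying a CNI manifest, minikube stats /opt/cni/bin/portmap to confirm the bundled CNI plugin binaries exist inside the VM; the stat output above shows a 2781656-byte file with mode 0755. The real probe runs inside the guest over SSH via ssh_runner; a local sketch of the same existence-and-executability check:

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        info, err := os.Stat("/opt/cni/bin/portmap")
        if err != nil {
            log.Fatalf("CNI plugin missing: %v", err)
        }
        // The log shows 0755, so every execute bit is set; fail if none are.
        if info.Mode().Perm()&0o111 == 0 {
            log.Fatal("portmap present but not executable")
        }
        fmt.Printf("%s: %d bytes, mode %v\n", info.Name(), info.Size(), info.Mode().Perm())
    }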
	I0610 12:08:01.356957    4588 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 12:08:01.357012    4588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 12:08:01.407001    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 12:08:01.826713    4588 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0610 12:08:01.826713    4588 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0610 12:08:01.826713    4588 command_runner.go:130] > serviceaccount/kindnet created
	I0610 12:08:01.826713    4588 command_runner.go:130] > daemonset.apps/kindnet created
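The kindnet manifest is not copied from a file: "scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)" means the rendered manifest is streamed from memory into the VM over SSH, then applied with the version-pinned kubectl, producing the four "created" lines above. A sketch of the streaming step using golang.org/x/crypto/ssh; the user, password, and port are placeholders rather than values from this run (only the VM IP appears in this log):

    package main

    import (
        "bytes"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        manifest := []byte("# cni.yaml contents would go here\n")
        cfg := &ssh.ClientConfig{
            User:            "docker",                                  // placeholder
            Auth:            []ssh.AuthMethod{ssh.Password("example")}, // placeholder
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),               // demo only
        }
        client, err := ssh.Dial("tcp", "172.17.159.171:22", cfg) // VM IP from this log, assumed port
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        // Feed the in-memory manifest to stdin and write it to the target path.
        sess.Stdin = bytes.NewReader(manifest)
        if err := sess.Run("sudo tee /var/tmp/minikube/cni.yaml >/dev/null"); err != nil {
            log.Fatal(err)
        }
    }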
	I0610 12:08:01.826855    4588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 12:08:01.841874    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-813300 minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=multinode-813300 minikube.k8s.io/primary=true
	I0610 12:08:01.841874    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:01.858654    4588 command_runner.go:130] > -16
	I0610 12:08:01.858754    4588 ops.go:34] apiserver oom_adj: -16
	I0610 12:08:02.040074    4588 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0610 12:08:02.040074    4588 command_runner.go:130] > node/multinode-813300 labeled
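Two bootstrap steps complete here: the primary node is labeled with minikube metadata (version, commit, updated_at, primary=true), and a minikube-rbac ClusterRoleBinding grants cluster-admin to the kube-system default serviceaccount so addon workloads can manage cluster resources. The log drives both through kubectl over SSH; an illustrative client-go equivalent of the binding (not minikube's code, and the kubeconfig path is the in-VM one from the log):

    package main

    import (
        "context"
        "log"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        crb := &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "ClusterRole",
                Name:     "cluster-admin",
            },
            Subjects: []rbacv1.Subject{{
                Kind:      "ServiceAccount",
                Name:      "default",
                Namespace: "kube-system",
            }},
        }
        if _, err := cs.RbacV1().ClusterRoleBindings().Create(
            context.Background(), crb, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }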
	I0610 12:08:02.055746    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:02.215756    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:02.564403    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:02.693633    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:03.066156    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:03.182182    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:03.552354    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:03.668708    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:04.061778    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:04.182269    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:04.561683    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:04.679824    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:05.065077    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:05.178135    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:05.563037    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:05.683240    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:06.069595    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:06.198551    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:06.567615    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:06.687919    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:07.059024    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:07.199437    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:07.559042    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:07.674044    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:08.065565    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:08.190015    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:08.564648    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:08.688052    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:09.069032    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:09.202107    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:09.560025    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:09.676786    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:10.062974    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:10.186607    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:10.564610    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:10.698529    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:11.060307    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:11.191152    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:11.563418    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:11.690517    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:12.054085    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:12.189950    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:12.562729    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:12.677893    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:13.067953    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:13.195579    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:13.558883    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:13.682493    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:14.061302    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:14.183257    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:14.567678    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:14.763665    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:15.056289    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:15.186893    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:15.564117    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:15.696782    4588 command_runner.go:130] > NAME      SECRETS   AGE
	I0610 12:08:15.696824    4588 command_runner.go:130] > default   0         0s
	I0610 12:08:15.696888    4588 kubeadm.go:1107] duration metric: took 13.8699211s to wait for elevateKubeSystemPrivileges
	W0610 12:08:15.696888    4588 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 12:08:15.696888    4588 kubeadm.go:393] duration metric: took 28.5406976s to StartCluster
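The burst of 'serviceaccounts "default" not found' lines above is expected: elevateKubeSystemPrivileges polls `kubectl get sa default` roughly every 500ms until the controller-manager has created the default namespace's default serviceaccount (it appears at 12:08:15.696, after 13.87s), since pods cannot be admitted into a namespace before that serviceaccount exists. A client-go sketch of the same wait loop; the timeout value is illustrative:

    package main

    import (
        "context"
        "log"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(2 * time.Minute) // assumed timeout
        for time.Now().Before(deadline) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(
                context.Background(), "default", metav1.GetOptions{})
            if err == nil {
                log.Println("default serviceaccount is ready")
                return
            }
            if !apierrors.IsNotFound(err) {
                log.Fatal(err) // anything but NotFound is a real failure
            }
            time.Sleep(500 * time.Millisecond) // same cadence as the log above
        }
        log.Fatal("timed out waiting for default serviceaccount")
    }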
	I0610 12:08:15.696888    4588 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:08:15.696888    4588 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:15.699411    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:08:15.700711    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 12:08:15.700711    4588 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 12:08:15.704964    4588 out.go:177] * Verifying Kubernetes components...
	I0610 12:08:15.700711    4588 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 12:08:15.701382    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:08:15.707565    4588 addons.go:69] Setting storage-provisioner=true in profile "multinode-813300"
	I0610 12:08:15.707565    4588 addons.go:69] Setting default-storageclass=true in profile "multinode-813300"
	I0610 12:08:15.707565    4588 addons.go:234] Setting addon storage-provisioner=true in "multinode-813300"
	I0610 12:08:15.707565    4588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-813300"
	I0610 12:08:15.707565    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:08:15.708184    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:15.709164    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:15.721781    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:08:16.014416    4588 command_runner.go:130] > apiVersion: v1
	I0610 12:08:16.014416    4588 command_runner.go:130] > data:
	I0610 12:08:16.014416    4588 command_runner.go:130] >   Corefile: |
	I0610 12:08:16.014416    4588 command_runner.go:130] >     .:53 {
	I0610 12:08:16.014416    4588 command_runner.go:130] >         errors
	I0610 12:08:16.014416    4588 command_runner.go:130] >         health {
	I0610 12:08:16.014416    4588 command_runner.go:130] >            lameduck 5s
	I0610 12:08:16.014416    4588 command_runner.go:130] >         }
	I0610 12:08:16.014416    4588 command_runner.go:130] >         ready
	I0610 12:08:16.014416    4588 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0610 12:08:16.014416    4588 command_runner.go:130] >            pods insecure
	I0610 12:08:16.014416    4588 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0610 12:08:16.014416    4588 command_runner.go:130] >            ttl 30
	I0610 12:08:16.014416    4588 command_runner.go:130] >         }
	I0610 12:08:16.014416    4588 command_runner.go:130] >         prometheus :9153
	I0610 12:08:16.014416    4588 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0610 12:08:16.014416    4588 command_runner.go:130] >            max_concurrent 1000
	I0610 12:08:16.014416    4588 command_runner.go:130] >         }
	I0610 12:08:16.014416    4588 command_runner.go:130] >         cache 30
	I0610 12:08:16.014416    4588 command_runner.go:130] >         loop
	I0610 12:08:16.014416    4588 command_runner.go:130] >         reload
	I0610 12:08:16.014416    4588 command_runner.go:130] >         loadbalance
	I0610 12:08:16.014416    4588 command_runner.go:130] >     }
	I0610 12:08:16.014416    4588 command_runner.go:130] > kind: ConfigMap
	I0610 12:08:16.014416    4588 command_runner.go:130] > metadata:
	I0610 12:08:16.014416    4588 command_runner.go:130] >   creationTimestamp: "2024-06-10T12:08:00Z"
	I0610 12:08:16.014416    4588 command_runner.go:130] >   name: coredns
	I0610 12:08:16.014416    4588 command_runner.go:130] >   namespace: kube-system
	I0610 12:08:16.014416    4588 command_runner.go:130] >   resourceVersion: "223"
	I0610 12:08:16.014416    4588 command_runner.go:130] >   uid: 6b6b1b18-8340-404c-ad83-066f280bc1b8
	I0610 12:08:16.014416    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 12:08:16.117425    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:08:16.455420    4588 command_runner.go:130] > configmap/coredns replaced
	I0610 12:08:16.455504    4588 start.go:946] {"host.minikube.internal": 172.17.144.1} host record injected into CoreDNS's ConfigMap
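The sed pipeline at 12:08:16.014 rewrites the Corefile fetched just above: it inserts a hosts{} block before the forward plugin, mapping host.minikube.internal to the Hyper-V gateway 172.17.144.1 (with fallthrough so all other names still reach the upstream resolver), and adds a log directive after errors. A toy Go reproduction of the hosts insertion on a trimmed Corefile, just to make the transformation concrete:

    package main

    import (
        "fmt"
        "log"
        "strings"
    )

    func main() {
        corefile := `.:53 {
            errors
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }
            cache 30
    }`
        hostsBlock := "        hosts {\n" +
            "           172.17.144.1 host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        // Insert the hosts block immediately before the forward plugin,
        // mirroring the sed expression in the log.
        idx := strings.Index(corefile, "        forward .")
        if idx < 0 {
            log.Fatal("forward plugin not found")
        }
        fmt.Println(corefile[:idx] + hostsBlock + corefile[idx:])
    }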
	I0610 12:08:16.457151    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:16.457151    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:16.457851    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:08:16.457851    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:08:16.459915    4588 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 12:08:16.460479    4588 node_ready.go:35] waiting up to 6m0s for node "multinode-813300" to be "Ready" ...
	I0610 12:08:16.460479    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:16.460479    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.460479    4588 round_trippers.go:463] GET https://172.17.159.171:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 12:08:16.460479    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.460479    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.460479    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.460479    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.460479    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.477494    4588 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0610 12:08:16.477494    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Audit-Id: 5d9cb475-9eb4-490b-84cb-48947c853346
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.477690    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.477690    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Content-Length: 291
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.477690    4588 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"362","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.477690    4588 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0610 12:08:16.477690    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.478258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.478258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.478258    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.478258    4588 round_trippers.go:580]     Audit-Id: a0a248f5-f010-49bd-be88-f9ce21911653
	I0610 12:08:16.478258    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.478536    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:16.478622    4588 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"362","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.478747    4588 round_trippers.go:463] PUT https://172.17.159.171:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 12:08:16.478747    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.478747    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.478747    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.478747    4588 round_trippers.go:473]     Content-Type: application/json
	I0610 12:08:16.494772    4588 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0610 12:08:16.495065    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.495065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Content-Length: 291
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Audit-Id: d535bcf1-d6e3-4914-8855-21dc33661312
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.495065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.495137    4588 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"364","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.969579    4588 round_trippers.go:463] GET https://172.17.159.171:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 12:08:16.969579    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.969579    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.969579    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:16.969579    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.969579    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.969579    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.969579    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.973208    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:16.973208    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.973208    4588 round_trippers.go:580]     Audit-Id: 72e9d5e3-bcfa-467a-b56b-e353a5261918
	I0610 12:08:16.973208    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.973208    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.973208    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.973665    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.973665    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.973920    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:16.973920    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.974025    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.974025    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.974025    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.974124    4588 round_trippers.go:580]     Content-Length: 291
	I0610 12:08:16.974025    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:16.974124    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.974257    4588 round_trippers.go:580]     Audit-Id: 606c7d1b-8607-486b-901e-1a37f0e7b82a
	I0610 12:08:16.974334    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.974445    4588 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"374","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.974850    4588 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-813300" context rescaled to 1 replicas
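The Scale-subresource round trip above (GET the scale, PUT it back with spec.replicas dropped from 2 to 1) is how minikube trims CoreDNS to a single replica on a fresh single-node profile; the resourceVersion carried in the Scale object (362 then 364) gives the PUT optimistic-concurrency protection against other writers. An illustrative client-go equivalent, not minikube's own code:

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()
        // GET .../deployments/coredns/scale, edit replicas, PUT it back.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(
            ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(
            ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
    }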
	I0610 12:08:17.461815    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:17.461815    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:17.461815    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:17.461815    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:17.466181    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:17.466181    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:17.466181    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:17.466624    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:17.466624    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:17.466624    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:17 GMT
	I0610 12:08:17.466624    4588 round_trippers.go:580]     Audit-Id: f25e967e-f2a6-43d3-b020-a71c67099236
	I0610 12:08:17.466624    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:17.466865    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:17.969784    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:17.969784    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:17.969784    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:17.969784    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:17.973880    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:17.974417    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:17.974417    4588 round_trippers.go:580]     Audit-Id: b880d804-4a72-46ac-a1eb-64811f820ef2
	I0610 12:08:17.974417    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:17.974417    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:17.974505    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:17.974505    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:17.974505    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:17 GMT
	I0610 12:08:17.974850    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:18.148749    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:18.148749    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:18.151774    4588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:08:18.148749    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:18.155349    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:18.155349    4588 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:08:18.155349    4588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 12:08:18.155349    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:18.155769    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:18.156778    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:08:18.157762    4588 addons.go:234] Setting addon default-storageclass=true in "multinode-813300"
	I0610 12:08:18.157762    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:08:18.158791    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:18.463954    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:18.464224    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:18.464224    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:18.464224    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:18.468817    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:18.468866    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:18.468866    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:18.468866    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:18 GMT
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Audit-Id: 08ba8b87-2ebe-4b1a-9bc7-7fc5017e34d1
	I0610 12:08:18.469449    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:18.469798    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
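From here node_ready.go polls GET /api/v1/nodes/multinode-813300 about twice a second, for up to the 6m0s budget set at 12:08:15.700, reporting Ready=False until kubelet posts a Ready condition (note the node object stays at resourceVersion 340 while nothing changes). A client-go sketch of the same readiness check:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            node, err := cs.CoreV1().Nodes().Get(
                context.Background(), "multinode-813300", metav1.GetOptions{})
            if err != nil {
                log.Fatal(err)
            }
            // A node is usable once its NodeReady condition is True.
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    fmt.Printf("Ready=%s\n", c.Status)
                    if c.Status == corev1.ConditionTrue {
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }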
	I0610 12:08:18.972076    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:18.972076    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:18.972076    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:18.972076    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:18.975651    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:18.975651    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:18.976021    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:18.976021    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:18 GMT
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Audit-Id: 9c65fa4d-0b55-4681-a48a-3b1a4dbb54ce
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:18.976441    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:19.462801    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:19.462801    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:19.462801    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:19.462801    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:19.466510    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:19.466510    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Audit-Id: 71bb3ada-5b1d-4303-8b49-627cb8297316
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:19.467506    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:19.467506    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:19 GMT
	I0610 12:08:19.467506    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:19.971420    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:19.971420    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:19.971517    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:19.971517    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:19.974973    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:19.974973    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:19.974973    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:19.974973    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:19.975460    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:19.975460    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:19 GMT
	I0610 12:08:19.975460    4588 round_trippers.go:580]     Audit-Id: 8cd747ea-2235-458e-8465-b8e6dd798dc6
	I0610 12:08:19.975460    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:19.975966    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:20.464847    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:20.465278    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:20.465387    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:20.465387    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:20.469653    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:20.469653    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:20.469653    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:20.469653    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:20 GMT
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Audit-Id: 77e2b9d7-6f2e-498f-b2b6-39850d5cf023
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:20.470875    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:20.471154    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:20.673653    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:20.673741    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:20.673874    4588 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 12:08:20.673874    4588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 12:08:20.673943    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:20.675325    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:20.675325    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:20.675325    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
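Every libmachine "[executing ==>]" line shells out to powershell.exe on the Windows host: one query reads the VM's state (Running, above), another reads the first IP address of the VM's first network adapter so minikube knows where to SSH. A Go sketch of the same invocation, runnable only on the Hyper-V host; the command strings are taken verbatim from the log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same query the driver issues; the VM name is the profile name.
        query := `(( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]`
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", query).Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("VM IP:", strings.TrimSpace(string(out)))
    }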
	I0610 12:08:20.971415    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:20.971628    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:20.971628    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:20.971628    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:20.977135    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:20.977726    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Audit-Id: 85b2432c-b255-446d-91a8-0de43d9b76ca
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:20.977726    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:20.977726    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:20 GMT
	I0610 12:08:20.978131    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:21.462028    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:21.462138    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:21.462213    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:21.462213    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:21.465088    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:21.465888    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:21.465888    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:21.465888    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:21.465888    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:21.466013    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:21.466013    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:21 GMT
	I0610 12:08:21.466013    4588 round_trippers.go:580]     Audit-Id: 8e7bfa2d-47b3-45cc-a081-3540ba8a26c7
	I0610 12:08:21.466463    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:21.972657    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:21.972657    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:21.972657    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:21.972657    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:21.977058    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:21.977058    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:21.977134    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:21.977134    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:21.977134    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:21.977134    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:21.977218    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:21 GMT
	I0610 12:08:21.977218    4588 round_trippers.go:580]     Audit-Id: 09c72934-2b71-461a-b4fd-0e14aaaf73b0
	I0610 12:08:21.977477    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:22.465513    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:22.465513    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:22.465581    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:22.465581    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:22.468907    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:22.468907    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:22.468907    4588 round_trippers.go:580]     Audit-Id: 046c63ca-5191-4136-ba48-0368a7e8d11c
	I0610 12:08:22.468907    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:22.468907    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:22.469891    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:22.469891    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:22.469891    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:22 GMT
	I0610 12:08:22.469891    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:22.972701    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:22.972701    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:22.972701    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:22.972701    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:22.976321    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:22.976321    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:22.976321    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:22.976321    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:22 GMT
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Audit-Id: cbf84943-c01b-45e1-b8d0-c6fbf9f578a4
	I0610 12:08:22.977441    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:22.977790    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
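	[editor's note] The repeating GET/"Ready":"False" pairs above are minikube polling the node object on a roughly 500 ms cadence and inspecting its Ready condition. The following is an illustrative client-go sketch of that check only, not minikube's actual node_ready.go; the kubeconfig path is a placeholder.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a clientset from a kubeconfig (placeholder path).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll until the node's Ready condition turns True, mirroring the
        // ~500 ms interval visible in the log timestamps above.
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-813300", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }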
	I0610 12:08:23.167919    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:23.168510    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:23.168510    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
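	[editor's note] The libmachine line above shows how the Hyper-V driver resolves the VM's address: it shells out to PowerShell and reads the first IP of the VM's first network adapter. A minimal Go sketch of the same shell-out follows; the PowerShell expression is copied verbatim from the log, while vmIP is an illustrative helper name, not driver code.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // vmIP (illustrative name) runs the PowerShell expression from the log to
    // read the first IP address of a Hyper-V VM's first network adapter.
    func vmIP(vmName string) (string, error) {
        script := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName)
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        ip, err := vmIP("multinode-813300")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip)
    }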
	I0610 12:08:23.467192    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:23.467263    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:23.467263    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:23.467263    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:23.470722    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:23.471197    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:23.471197    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:23 GMT
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Audit-Id: 15d64748-9238-483a-8170-ffc83f1d908d
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:23.471197    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:23.471538    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:23.612259    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:08:23.612340    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:23.612790    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:08:23.770726    4588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:08:23.973469    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:24.067126    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:24.067126    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:24.067126    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:24.071456    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:24.071456    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Audit-Id: 3f7761c1-775f-479a-926e-e6e225ae5297
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:24.071456    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:24.071456    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:24 GMT
	I0610 12:08:24.071917    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:24.381409    4588 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0610 12:08:24.381600    4588 command_runner.go:130] > pod/storage-provisioner created
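	[editor's note] The sshutil/ssh_runner pair a few lines up connects to the VM as user "docker" with the machine's private key and applies the addon manifest with the in-VM kubectl; the five "created" lines above are kubectl's output echoed back through command_runner. A hedged sketch of that remote execution with golang.org/x/crypto/ssh (not minikube's sshutil implementation; the address, key path, and command line are taken from the log, host-key checking is disabled only because this is a throwaway test VM):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "172.17.159.171:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }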
	I0610 12:08:24.466424    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:24.466616    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:24.466616    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:24.466616    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:24.469640    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:24.471213    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Audit-Id: 644ee470-8778-4b97-ade1-3d396880a3eb
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:24.471250    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:24.471250    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:24 GMT
	I0610 12:08:24.471668    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:24.975984    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:24.975984    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:24.976290    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:24.976290    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:24.979743    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:24.979743    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Audit-Id: 577d1627-ffbf-4769-b31e-54336e194420
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:24.979743    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:24.979743    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:24 GMT
	I0610 12:08:24.980589    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:24.981314    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:25.467082    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:25.467082    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:25.467082    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:25.467405    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:25.471429    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:25.471429    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:25.471429    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:25.471429    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:25 GMT
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Audit-Id: b22a7791-024c-48c8-a3d0-60f86c7bd039
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:25.471826    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:25.970625    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:25.970625    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:25.970625    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:25.970625    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:25.975518    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:25.975586    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:25.975586    4588 round_trippers.go:580]     Audit-Id: d71153d6-4e44-462d-ae60-2161aced6f71
	I0610 12:08:25.975586    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:25.975668    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:25.975668    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:25.975668    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:25.975668    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:25 GMT
	I0610 12:08:25.975668    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:26.019285    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:08:26.019893    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:26.020248    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:08:26.163944    4588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 12:08:26.337920    4588 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0610 12:08:26.338319    4588 round_trippers.go:463] GET https://172.17.159.171:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 12:08:26.338580    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.338580    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.338704    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.349001    4588 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 12:08:26.350011    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.350011    4588 round_trippers.go:580]     Audit-Id: 6617c405-50a5-4bfc-aadb-527dd013680d
	I0610 12:08:26.350011    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.350011    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.350063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.350063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.350063    4588 round_trippers.go:580]     Content-Length: 1273
	I0610 12:08:26.350063    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.350188    4588 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"3c2bb998-bd12-48de-88bb-ef852d4ef17b","resourceVersion":"402","creationTimestamp":"2024-06-10T12:08:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T12:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0610 12:08:26.351049    4588 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3c2bb998-bd12-48de-88bb-ef852d4ef17b","resourceVersion":"402","creationTimestamp":"2024-06-10T12:08:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T12:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0610 12:08:26.351165    4588 round_trippers.go:463] PUT https://172.17.159.171:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0610 12:08:26.351165    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.351165    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.351165    4588 round_trippers.go:473]     Content-Type: application/json
	I0610 12:08:26.351231    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.354220    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:26.354220    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Audit-Id: 3328ace5-f8a9-432f-95d6-2e022f2f96ba
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.355159    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.355159    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.355159    4588 round_trippers.go:580]     Content-Length: 1220
	I0610 12:08:26.355159    4588 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3c2bb998-bd12-48de-88bb-ef852d4ef17b","resourceVersion":"402","creationTimestamp":"2024-06-10T12:08:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T12:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0610 12:08:26.359449    4588 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 12:08:26.362054    4588 addons.go:510] duration metric: took 10.6612568s for enable addons: enabled=[storage-provisioner default-storageclass]
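	[editor's note] The GET followed by a PUT on /apis/storage.k8s.io/v1/storageclasses/standard just above is the default-storageclass addon re-asserting the storageclass.kubernetes.io/is-default-class annotation on the "standard" class it applied. A minimal client-go equivalent of that read-modify-write, assuming an already-configured clientset (sketch only, not the addon's code):

    package addons

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureDefaultStorageClass (illustrative name) mirrors the GET→PUT pair
    // in the log: fetch the "standard" StorageClass and write it back with
    // the default-class annotation set.
    func ensureDefaultStorageClass(ctx context.Context, client kubernetes.Interface) error {
        sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        _, err = client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }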
	I0610 12:08:26.472340    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:26.472340    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.472340    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.472340    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.476989    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:26.476989    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.476989    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.476989    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.477437    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.477437    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.477437    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.477437    4588 round_trippers.go:580]     Audit-Id: 2d7eac79-25bf-4e84-bec6-871d0084a72d
	I0610 12:08:26.477671    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:26.973673    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:26.973888    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.973888    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.973888    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.977273    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:26.977273    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.977273    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.977273    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.977273    4588 round_trippers.go:580]     Audit-Id: 4981fd01-235e-4c9f-9367-3a7de9313d0e
	I0610 12:08:26.977273    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.978045    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.978045    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.978205    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:27.462245    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:27.462245    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:27.462245    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:27.462340    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:27.467699    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:27.467699    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:27.467825    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:27.467825    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:27 GMT
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Audit-Id: c9d0a77d-a57e-4d70-84a2-e398f5ffa765
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:27.468099    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:27.469115    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:27.960920    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:27.960920    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:27.960920    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:27.960920    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:27.965654    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:27.965654    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:27.965654    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:27.965654    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:27.965654    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:27.966150    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:27 GMT
	I0610 12:08:27.966150    4588 round_trippers.go:580]     Audit-Id: 5d8201db-b32c-4acf-8ad6-345335bd6d2d
	I0610 12:08:27.966150    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:27.966354    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:28.474445    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:28.474445    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:28.474445    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:28.474445    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:28.482343    4588 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:08:28.482431    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:28.482431    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:28.482431    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:28 GMT
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Audit-Id: 31e80831-1c73-4c80-b784-0f1dce4ba371
	I0610 12:08:28.482431    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:28.961355    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:28.961600    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:28.961600    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:28.961600    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:28.965419    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:28.965419    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Audit-Id: 0bdf6c06-0223-405f-8706-dfbe77e36c8b
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:28.965419    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:28.965419    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:28 GMT
	I0610 12:08:28.966753    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:29.464161    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:29.464216    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:29.464216    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:29.464216    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:29.468789    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:29.468789    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:29 GMT
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Audit-Id: a565b77c-b1b9-4089-8623-2c276f67440d
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:29.469063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:29.469063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:29.469412    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:29.469971    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:29.962498    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:29.962498    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:29.962498    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:29.962498    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:29.967420    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:29.967881    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:29.967881    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:29.967881    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:29 GMT
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Audit-Id: 16709f5b-fb80-40b1-a6e2-9fdc0e2c33b6
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:29.967881    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:30.466094    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:30.466389    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.466389    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.466451    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.473102    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:08:30.473102    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.473102    4588 round_trippers.go:580]     Audit-Id: b7b3666c-e49c-4427-9cde-6abd578e055f
	I0610 12:08:30.473102    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.473102    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.473376    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.473376    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.473376    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:30 GMT
	I0610 12:08:30.473554    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:30.971452    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:30.971452    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.971586    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.971586    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.974265    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:30.974265    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.975194    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.975194    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:30 GMT
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Audit-Id: ee0e8e0f-291b-4fd4-a42f-a1ec6d75fd51
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.975506    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"407","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0610 12:08:30.975734    4588 node_ready.go:49] node "multinode-813300" has status "Ready":"True"
	I0610 12:08:30.975734    4588 node_ready.go:38] duration metric: took 14.5151365s for node "multinode-813300" to be "Ready" ...
	I0610 12:08:30.975734    4588 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
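	[editor's note] With the node Ready after 14.5 s, the log moves from node_ready.go to pod_ready.go: it lists kube-system pods and then waits on each system-critical pod's Ready condition. An illustrative sketch of that per-pod check, assuming a configured clientset (not minikube's pod_ready.go):

    package addons

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // reportPodReadiness (illustrative name) lists kube-system pods and prints
    // whether each carries the Ready condition, as the wait loop above checks.
    func reportPodReadiness(ctx context.Context, client kubernetes.Interface) error {
        pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            fmt.Printf("%s ready=%v\n", p.Name, ready)
        }
        return nil
    }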
	I0610 12:08:30.975734    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:30.975734    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.975734    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.975734    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.981306    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:30.981425    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.981425    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:30 GMT
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Audit-Id: 938fb101-b66e-4d12-9cf6-8a418d730def
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.981425    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.982695    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0610 12:08:30.987017    4588 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:30.987017    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:30.987017    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.987017    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.987017    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.991014    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:30.991014    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.991014    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.991014    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.991014    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.991014    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.991650    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:30.991650    4588 round_trippers.go:580]     Audit-Id: a20fe82f-5987-467b-a829-238d7f03bb9d
	I0610 12:08:30.992127    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:30.992583    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:30.992583    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.992583    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.992583    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.995139    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:30.995139    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.995139    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.995139    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.995139    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:30.995139    4588 round_trippers.go:580]     Audit-Id: 67280c1b-dd0e-4dd1-adff-518782aaded3
	I0610 12:08:30.995139    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.995736    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.995736    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"407","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0610 12:08:31.497373    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:31.497442    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.497442    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.497503    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:31.500007    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:31.500007    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Audit-Id: a1858d6a-493d-4307-88c5-562319ac0e90
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:31.500951    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:31.500951    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:31.504473    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:31.505489    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:31.505489    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.505489    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.505489    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:31.511925    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:08:31.512084    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Audit-Id: 75267635-50fe-4afc-8272-36f1623fe090
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:31.512084    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:31.512084    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:31.512456    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:31.989543    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:31.989543    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.989543    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.989543    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:31.993664    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:31.993817    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Audit-Id: ad39aa17-cc09-4f93-bf6b-cdc9adb39955
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:31.993817    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:31.993817    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:31.996841    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:31.997224    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:31.997758    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.997758    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.997758    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.002165    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:32.002165    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.002165    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:32.002165    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:32.002711    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:32.002711    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:32.002711    4588 round_trippers.go:580]     Audit-Id: e13fc67f-b777-4f9b-abfd-1f1127f85080
	I0610 12:08:32.002711    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.002926    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:32.495322    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:32.495503    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:32.495503    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:32.495503    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.499334    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:32.499334    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.499334    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:32.499334    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Audit-Id: 750ca129-89cc-4b31-978b-eb45c8205826
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:32.500108    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:32.500884    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:32.500884    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:32.500939    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:32.500939    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.505349    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:32.505349    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.505349    4588 round_trippers.go:580]     Audit-Id: 4bd7a5b6-e799-44ba-b894-becda2bbf011
	I0610 12:08:32.505349    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.505349    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:32.505349    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:32.505349    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:32.505887    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:32.506152    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:32.995187    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:32.995187    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:32.995187    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:32.995187    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.999219    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:32.999219    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Audit-Id: 58a497a3-7bd3-4807-989d-93a7abd2266d
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.000085    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.000085    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.000226    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0610 12:08:33.001482    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.001482    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.001482    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.001482    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.004802    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:33.004802    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.004802    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.004802    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.004802    4588 round_trippers.go:580]     Audit-Id: b39df611-6465-4b74-a9a3-b939651b43fe
	I0610 12:08:33.004802    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.005828    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.005974    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.006340    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.006877    4588 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.006877    4588 pod_ready.go:81] duration metric: took 2.0198434s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
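
The coredns wait above is the pod_ready loop: re-fetch the pod roughly every 500ms, report Ready once the pod's PodReady condition is True, and re-fetch the node between attempts to confirm it is still registered. A minimal client-go sketch of the same pattern, assuming an already-built *kubernetes.Clientset (the helper name waitPodReady and its wiring are illustrative, not taken from this log):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod's Ready condition the way pod_ready.go does
    // above: 500ms interval, 6m timeout.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient API errors; keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
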
	I0610 12:08:33.006932    4588 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.007046    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-813300
	I0610 12:08:33.007046    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.007046    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.007094    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.009577    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.009577    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.009577    4588 round_trippers.go:580]     Audit-Id: 76096531-167d-4f83-bd03-e7713e1e8d9d
	I0610 12:08:33.009577    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.009577    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.010082    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.010082    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.010082    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.010082    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"e48af956-8533-4b8e-be5d-0834484cbffa","resourceVersion":"385","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.159.171:2379","kubernetes.io/config.hash":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.mirror":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.seen":"2024-06-10T12:08:00.781973961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0610 12:08:33.010556    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.010556    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.010556    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.010556    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.013440    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.013440    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.013440    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.013440    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Audit-Id: 6b040327-de96-49d5-8e30-1c94f19e6445
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.014281    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.014698    4588 pod_ready.go:92] pod "etcd-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.014698    4588 pod_ready.go:81] duration metric: took 7.7654ms for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.014760    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.014878    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-813300
	I0610 12:08:33.014878    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.014908    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.014908    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.019251    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:33.019385    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.019385    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.019385    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Audit-Id: a56c64cd-4b78-4ec4-b317-d23c5bd91346
	I0610 12:08:33.019916    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-813300","namespace":"kube-system","uid":"f824b391-b3d2-49ec-ba7d-863cb2150f81","resourceVersion":"386","creationTimestamp":"2024-06-10T12:07:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.159.171:8443","kubernetes.io/config.hash":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.mirror":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.seen":"2024-06-10T12:07:52.425003820Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0610 12:08:33.020589    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.020695    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.020695    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.020695    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.024226    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:33.024226    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Audit-Id: ba42cb6f-0b20-475d-81bb-08c0c2b424c1
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.024226    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.024226    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.024787    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.025075    4588 pod_ready.go:92] pod "kube-apiserver-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.025075    4588 pod_ready.go:81] duration metric: took 10.3143ms for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.025075    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.025075    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-813300
	I0610 12:08:33.025075    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.025075    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.025075    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.027688    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.027688    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Audit-Id: 627bf56d-7d78-4898-b65b-7e67c35b4b59
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.027688    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.027688    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.028800    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-813300","namespace":"kube-system","uid":"879be9d7-8b2b-4f58-ba70-61d4e9f3441e","resourceVersion":"384","creationTimestamp":"2024-06-10T12:08:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.mirror":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.seen":"2024-06-10T12:08:00.781970961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0610 12:08:33.029481    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.029481    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.029481    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.029481    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.031724    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.031724    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Audit-Id: 545f7fb9-5389-46a1-9ca7-54eea814ce0e
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.031724    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.031724    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.032537    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.033863    4588 pod_ready.go:92] pod "kube-controller-manager-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.034008    4588 pod_ready.go:81] duration metric: took 8.9332ms for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.034008    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.034008    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:08:33.034008    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.034008    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.034229    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.036496    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.036496    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.036496    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Audit-Id: 711cf59f-d3e3-4f21-a5db-187fe7f58c13
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.036496    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.036496    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrpvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1","resourceVersion":"380","creationTimestamp":"2024-06-10T12:08:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0610 12:08:33.037906    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.037952    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.038071    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.038071    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.040362    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.040362    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.040362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Audit-Id: e6d94a88-bd9f-4626-b1c3-879d50c77dd8
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.040362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.041393    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.041808    4588 pod_ready.go:92] pod "kube-proxy-nrpvt" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.041808    4588 pod_ready.go:81] duration metric: took 7.8004ms for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.041877    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.195916    4588 request.go:629] Waited for 154.0375ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:08:33.196165    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:08:33.196165    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.196165    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.196232    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.202934    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:08:33.203372    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.203372    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.203372    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.203372    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.203439    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.203439    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.203439    4588 round_trippers.go:580]     Audit-Id: 3370c09f-361f-45e5-a7c2-7da8cdbd9831
	I0610 12:08:33.203622    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-813300","namespace":"kube-system","uid":"bd85735c-2f0d-48ab-bb0e-83f471c3af0a","resourceVersion":"387","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.mirror":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.seen":"2024-06-10T12:08:00.781972261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0610 12:08:33.400282    4588 request.go:629] Waited for 195.7136ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.400649    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.400673    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.400673    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.400673    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.403562    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.403562    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Audit-Id: dec8d733-a395-4375-9e53-c5161847aeac
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.403562    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.403562    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.404668    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.405082    4588 pod_ready.go:92] pod "kube-scheduler-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.405082    4588 pod_ready.go:81] duration metric: took 363.2018ms for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.405082    4588 pod_ready.go:38] duration metric: took 2.4293279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
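
The "Waited for ... due to client-side throttling" lines above come from client-go's own rate limiter, not from API-server priority and fairness: a rest.Config left at its defaults (roughly QPS 5, burst 10) spaces out bursts of GETs like these on the client side. A sketch of where those knobs live, assuming a kubeconfig-based config (the variable kubeconfigPath and the values are illustrative):

    // Sketch: raise client-go's client-side rate limits so bursts of polling
    // GETs are not delayed locally (illustrative values, not minikube's).
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    if err != nil {
        log.Fatal(err)
    }
    cfg.QPS = 50
    cfg.Burst = 100
    cs, err := kubernetes.NewForConfig(cfg)
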
	I0610 12:08:33.405082    4588 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:08:33.419788    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:08:33.450414    4588 command_runner.go:130] > 1957
	I0610 12:08:33.450668    4588 api_server.go:72] duration metric: took 17.7498125s to wait for apiserver process to appear ...
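
The process check above runs pgrep inside the guest: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest matching PID (1957 here), which is what command_runner echoes back. An equivalent local sketch (assumed, not the actual ssh_runner call; imports os/exec and bytes):

    // Sketch: a zero exit status plus a PID on stdout means a matching
    // kube-apiserver process is running.
    out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    running := err == nil && len(bytes.TrimSpace(out)) > 0
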
	I0610 12:08:33.450668    4588 api_server.go:88] waiting for apiserver healthz status ...
	I0610 12:08:33.450668    4588 api_server.go:253] Checking apiserver healthz at https://172.17.159.171:8443/healthz ...
	I0610 12:08:33.458286    4588 api_server.go:279] https://172.17.159.171:8443/healthz returned 200:
	ok
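
The healthz probe is a plain authenticated GET whose body is literally "ok" when the server is healthy, as logged above. With client-go the same check can ride the discovery REST client (a sketch, reusing the clientset cs assumed earlier):

    // Sketch: GET /healthz through the already-authenticated REST client.
    body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    healthy := err == nil && string(body) == "ok"
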
	I0610 12:08:33.458286    4588 round_trippers.go:463] GET https://172.17.159.171:8443/version
	I0610 12:08:33.458286    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.458286    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.458286    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.462485    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:33.462485    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.462485    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.462485    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Content-Length: 263
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Audit-Id: 16c16afd-0fbc-487c-ad2f-457898147096
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.463107    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.463107    4588 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 12:08:33.463254    4588 api_server.go:141] control plane version: v1.30.1
	I0610 12:08:33.463254    4588 api_server.go:131] duration metric: took 12.5864ms to wait for apiserver health ...
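
The /version body above decodes into client-go's version.Info; the convenience wrapper below performs the same GET (sketch, same assumed clientset):

    // Sketch: same GET /version as above, decoded into a version.Info
    // (info.GitVersion would be "v1.30.1" for this cluster).
    info, err := cs.Discovery().ServerVersion()
    if err == nil {
        fmt.Println(info.GitVersion)
    }
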
	I0610 12:08:33.463316    4588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:08:33.605309    4588 request.go:629] Waited for 141.9539ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:33.605546    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:33.605546    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.605546    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.605546    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.611373    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:33.612010    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.612010    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.612080    4588 round_trippers.go:580]     Audit-Id: 8601551f-3309-4d3c-a243-c54f622ba627
	I0610 12:08:33.612080    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.612080    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.612080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.612080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.613396    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0610 12:08:33.616317    4588 system_pods.go:59] 8 kube-system pods found
	I0610 12:08:33.616317    4588 system_pods.go:61] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "etcd-multinode-813300" [e48af956-8533-4b8e-be5d-0834484cbffa] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-apiserver-multinode-813300" [f824b391-b3d2-49ec-ba7d-863cb2150f81] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:08:33.616317    4588 system_pods.go:74] duration metric: took 153.0001ms to wait for pod list to return data ...
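
system_pods lists everything in kube-system once and checks each entry's phase, which is why the eight pods above are each reported Running. The equivalent client-go sketch (same assumed clientset):

    // Sketch: list kube-system pods and flag any that are not yet Running.
    pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, p := range pods.Items {
        if p.Status.Phase != corev1.PodRunning {
            fmt.Printf("pod %q not running yet: %s\n", p.Name, p.Status.Phase)
        }
    }
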
	I0610 12:08:33.616317    4588 default_sa.go:34] waiting for default service account to be created ...
	I0610 12:08:33.808138    4588 request.go:629] Waited for 191.1567ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/default/serviceaccounts
	I0610 12:08:33.808225    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/default/serviceaccounts
	I0610 12:08:33.808225    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.808225    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.808225    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.813003    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:33.813365    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Content-Length: 261
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Audit-Id: 53fadb3a-0bcd-4518-aaa6-0171143260ed
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.813365    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.813365    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.813459    4588 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2033967b-ff48-4641-b518-45705bf023c6","resourceVersion":"336","creationTimestamp":"2024-06-10T12:08:15Z"}}]}
	I0610 12:08:33.813646    4588 default_sa.go:45] found service account: "default"
	I0610 12:08:33.813646    4588 default_sa.go:55] duration metric: took 197.3272ms for default service account to be created ...
	I0610 12:08:33.813646    4588 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 12:08:34.013591    4588 request.go:629] Waited for 199.9428ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:34.013591    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:34.013591    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:34.013591    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:34.013591    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:34.019566    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:34.019566    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Audit-Id: ccddedc7-4912-4f64-a5db-e857ae601e77
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:34.019566    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:34.019566    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:34 GMT
	I0610 12:08:34.022328    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0610 12:08:34.025311    4588 system_pods.go:86] 8 kube-system pods found
	I0610 12:08:34.025311    4588 system_pods.go:89] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "etcd-multinode-813300" [e48af956-8533-4b8e-be5d-0834484cbffa] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-apiserver-multinode-813300" [f824b391-b3d2-49ec-ba7d-863cb2150f81] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:08:34.025447    4588 system_pods.go:126] duration metric: took 211.7988ms to wait for k8s-apps to be running ...
	I0610 12:08:34.025531    4588 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:08:34.036640    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:08:34.068920    4588 system_svc.go:56] duration metric: took 43.0864ms WaitForService to wait for kubelet
	I0610 12:08:34.068920    4588 kubeadm.go:576] duration metric: took 18.3680596s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:08:34.068920    4588 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:08:34.200619    4588 request.go:629] Waited for 131.5276ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes
	I0610 12:08:34.200701    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes
	I0610 12:08:34.200763    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:34.200763    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:34.200763    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:34.204676    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:34.204676    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:34.204676    4588 round_trippers.go:580]     Audit-Id: f224ea65-0cb9-4a1e-8a42-23d61494a02a
	I0610 12:08:34.204676    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:34.205255    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:34.205255    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:34.205255    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:34.205255    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:34 GMT
	I0610 12:08:34.205556    4588 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5013 chars]
	I0610 12:08:34.206165    4588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:08:34.206219    4588 node_conditions.go:123] node cpu capacity is 2
	I0610 12:08:34.206219    4588 node_conditions.go:105] duration metric: took 137.298ms to run NodePressure ...
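The readiness gate the log just cleared (system_pods, default_sa, node_ready, NodePressure) can be reproduced by hand against the same cluster. A minimal PowerShell sketch, assuming a kubeconfig context named "multinode-813300" (the context name is an assumption, not read from the log):

    # list kube-system pods and confirm everything is Running (mirrors system_pods.go)
    kubectl --context multinode-813300 get pods -n kube-system
    # confirm the default service account exists (mirrors default_sa.go)
    kubectl --context multinode-813300 get serviceaccount default -n default
    # read per-node CPU and ephemeral-storage capacity (mirrors node_conditions.go)
    kubectl --context multinode-813300 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'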
	I0610 12:08:34.206273    4588 start.go:240] waiting for startup goroutines ...
	I0610 12:08:34.206302    4588 start.go:245] waiting for cluster config update ...
	I0610 12:08:34.206396    4588 start.go:254] writing updated cluster config ...
	I0610 12:08:34.210462    4588 out.go:177] 
	I0610 12:08:34.211951    4588 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:08:34.219370    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:08:34.219370    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:08:34.230682    4588 out.go:177] * Starting "multinode-813300-m02" worker node in "multinode-813300" cluster
	I0610 12:08:34.232875    4588 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:08:34.232875    4588 cache.go:56] Caching tarball of preloaded images
	I0610 12:08:34.232875    4588 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:08:34.232875    4588 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 12:08:34.233735    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:08:34.236944    4588 start.go:360] acquireMachinesLock for multinode-813300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:08:34.236944    4588 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300-m02"
	I0610 12:08:34.237615    4588 start.go:93] Provisioning new machine with config: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:08:34.237615    4588 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0610 12:08:34.239702    4588 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 12:08:34.239702    4588 start.go:159] libmachine.API.Create for "multinode-813300" (driver="hyperv")
	I0610 12:08:34.240395    4588 client.go:168] LocalClient.Create starting
	I0610 12:08:34.240700    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 12:08:34.240700    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:08:34.241203    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:08:34.241370    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 12:08:34.241548    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:08:34.241548    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:08:34.241738    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 12:08:36.262319    4588 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 12:08:36.262385    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:36.262385    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 12:08:38.140816    4588 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 12:08:38.141270    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:38.141270    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:08:39.735536    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:08:39.735536    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:39.735536    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:08:43.725162    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:08:43.725162    4588 main.go:141] libmachine: [stderr =====>] : 
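The switch query above keeps only external switches plus the host's built-in "Default Switch", which minikube matches by its fixed GUID. A hand-runnable equivalent that matches on the switch name instead of the GUID (the name match is an illustrative simplification):

    ConvertTo-Json @(Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Name -eq 'Default Switch') } |
        Sort-Object -Property SwitchType)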
	I0610 12:08:43.727495    4588 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 12:08:44.236510    4588 main.go:141] libmachine: Creating SSH key...
	I0610 12:08:44.388057    4588 main.go:141] libmachine: Creating VM...
	I0610 12:08:44.388057    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:08:47.561217    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:08:47.561217    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:47.561217    4588 main.go:141] libmachine: Using switch "Default Switch"
	I0610 12:08:47.561217    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:08:49.510281    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:08:49.510430    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:49.510430    4588 main.go:141] libmachine: Creating VHD
	I0610 12:08:49.510430    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 12:08:53.452049    4588 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 20794A7E-9F85-4605-9CFB-9AB5A2243F5C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 12:08:53.452049    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:53.452049    4588 main.go:141] libmachine: Writing magic tar header
	I0610 12:08:53.452049    4588 main.go:141] libmachine: Writing SSH key tar header
	I0610 12:08:53.463808    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 12:08:56.776237    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:08:56.776237    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:56.776915    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\disk.vhd' -SizeBytes 20000MB
	I0610 12:08:59.460936    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:08:59.460999    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:59.460999    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-813300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 12:09:03.294382    4588 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-813300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 12:09:03.295386    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:03.295486    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-813300-m02 -DynamicMemoryEnabled $false
	I0610 12:09:05.730826    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:05.731605    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:05.731605    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-813300-m02 -Count 2
	I0610 12:09:08.091225    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:08.091225    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:08.091389    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-813300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\boot2docker.iso'
	I0610 12:09:10.917877    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:10.918532    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:10.918532    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-813300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\disk.vhd'
	I0610 12:09:13.890119    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:13.891006    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:13.891060    4588 main.go:141] libmachine: Starting VM...
	I0610 12:09:13.891060    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300-m02
	I0610 12:09:17.217967    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:17.217967    4588 main.go:141] libmachine: [stderr =====>] : 
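Condensed from the commands executed above, the whole worker-VM build is a short PowerShell sequence; this sketch uses illustrative variable names and omits the step in which minikube writes the SSH key as a tar payload into the raw fixed.vhd before conversion:

    $name = 'multinode-813300-m02'
    $dir  = "C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\$name"
    # small fixed VHD first (it carries the key payload), then convert to dynamic and grow it
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB
    # create the VM, pin memory and CPU count, attach boot ISO and disk, then boot it
    Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor $name -Count 2
    Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"
    Hyper-V\Start-VM $name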
	I0610 12:09:17.218129    4588 main.go:141] libmachine: Waiting for host to start...
	I0610 12:09:17.218287    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:19.673262    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:19.673262    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:19.673574    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:22.445782    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:22.445782    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:23.455957    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:25.876321    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:25.876909    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:25.876979    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:28.622723    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:28.622723    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:29.627749    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:32.027877    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:32.027952    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:32.027991    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:34.791963    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:34.791963    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:35.800230    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:38.203051    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:38.203636    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:38.203636    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:40.963011    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:40.963011    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:41.973628    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:44.416582    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:44.416582    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:44.416582    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:47.254049    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:09:47.254049    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:47.254049    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:49.644892    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:49.644892    4588 main.go:141] libmachine: [stderr =====>] : 
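The "Waiting for host to start..." phase above alternates two probes, VM state and the first adapter's IP address, retrying roughly once a second until the guest reports an address. The same loop as a sketch:

    $name = 'multinode-813300-m02'
    do {
        Start-Sleep -Seconds 1
        $state = (Hyper-V\Get-VM $name).State
        $ip    = ((Hyper-V\Get-VM $name).NetworkAdapters[0]).IPAddresses | Select-Object -First 1
    } until (($state -eq 'Running') -and $ip)
    $ip   # 172.17.151.128 in the run above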
	I0610 12:09:49.645559    4588 machine.go:94] provisionDockerMachine start ...
	I0610 12:09:49.645788    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:51.995513    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:51.995513    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:51.995513    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:54.722854    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:09:54.722854    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:54.729030    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:09:54.740222    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:09:54.741219    4588 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:09:54.870273    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:09:54.870349    4588 buildroot.go:166] provisioning hostname "multinode-813300-m02"
	I0610 12:09:54.870417    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:57.155923    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:57.156835    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:57.156835    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:59.869088    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:09:59.869870    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:59.876256    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:09:59.876256    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:09:59.876845    4588 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300-m02 && echo "multinode-813300-m02" | sudo tee /etc/hostname
	I0610 12:10:00.036418    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300-m02
	
	I0610 12:10:00.036539    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:02.352338    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:02.352338    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:02.352850    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:05.115922    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:05.116005    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:05.120761    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:05.121019    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:05.121019    4588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:10:05.266489    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:10:05.266489    4588 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:10:05.266489    4588 buildroot.go:174] setting up certificates
	I0610 12:10:05.266489    4588 provision.go:84] configureAuth start
	I0610 12:10:05.266489    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:07.629056    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:07.629289    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:07.629378    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:10.421266    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:10.422131    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:10.422131    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:12.788172    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:12.788347    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:12.788347    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:15.586195    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:15.586195    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:15.586847    4588 provision.go:143] copyHostCerts
	I0610 12:10:15.587004    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 12:10:15.587261    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:10:15.587261    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:10:15.587727    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:10:15.588865    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 12:10:15.589171    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:10:15.589171    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:10:15.589536    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:10:15.589840    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 12:10:15.590722    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:10:15.590722    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:10:15.591178    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:10:15.592371    4588 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300-m02 san=[127.0.0.1 172.17.151.128 localhost minikube multinode-813300-m02]
	I0610 12:10:15.916216    4588 provision.go:177] copyRemoteCerts
	I0610 12:10:15.928750    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:10:15.928750    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:18.250037    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:18.250938    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:18.250996    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:20.970158    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:20.971086    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:20.971674    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:10:21.079420    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1499555s)
	I0610 12:10:21.079420    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 12:10:21.079775    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:10:21.131679    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 12:10:21.132137    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 12:10:21.184128    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 12:10:21.184257    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 12:10:21.239558    4588 provision.go:87] duration metric: took 15.9729376s to configureAuth
	I0610 12:10:21.239632    4588 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:10:21.240051    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:10:21.240051    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:23.584229    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:23.584229    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:23.584318    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:26.362007    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:26.362153    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:26.368272    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:26.369078    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:26.369078    4588 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:10:26.500066    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:10:26.500204    4588 buildroot.go:70] root file system type: tmpfs
	I0610 12:10:26.500502    4588 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:10:26.500502    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:28.830472    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:28.830822    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:28.830822    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:31.638236    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:31.638722    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:31.645248    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:31.645248    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:31.645990    4588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.159.171"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:10:31.817981    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.159.171
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 12:10:31.817981    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:34.157297    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:34.157297    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:34.157297    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:36.961294    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:36.962039    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:36.967778    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:36.968315    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:36.968475    4588 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:10:39.155315    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:10:39.155315    4588 machine.go:97] duration metric: took 49.5093501s to provisionDockerMachine
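The unit install just above uses an install-if-changed idiom: the candidate file is written to docker.service.new, and only when it differs from the installed copy (here the installed copy did not exist yet, hence the diff error) is it moved into place, followed by daemon-reload, enable, and restart. The same one-liner run manually over SSH, with an assumed key path taken from minikube's machine store:

    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa'
    ssh -i $key docker@172.17.151.128 'sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }'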
	I0610 12:10:39.155315    4588 client.go:171] duration metric: took 2m4.9138483s to LocalClient.Create
	I0610 12:10:39.155867    4588 start.go:167] duration metric: took 2m4.9151413s to libmachine.API.Create "multinode-813300"
	I0610 12:10:39.155867    4588 start.go:293] postStartSetup for "multinode-813300-m02" (driver="hyperv")
	I0610 12:10:39.155986    4588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:10:39.168428    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:10:39.168428    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:41.493819    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:41.493819    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:41.493819    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:44.301123    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:44.301123    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:44.301723    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:10:44.414294    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2457575s)
	I0610 12:10:44.427480    4588 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:10:44.434767    4588 command_runner.go:130] > NAME=Buildroot
	I0610 12:10:44.434767    4588 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 12:10:44.434904    4588 command_runner.go:130] > ID=buildroot
	I0610 12:10:44.434904    4588 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 12:10:44.434904    4588 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 12:10:44.435037    4588 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:10:44.435068    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:10:44.435634    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:10:44.437223    4588 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:10:44.437223    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 12:10:44.450343    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:10:44.472867    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:10:44.524171    4588 start.go:296] duration metric: took 5.3682595s for postStartSetup
	I0610 12:10:44.527309    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:46.868202    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:46.868202    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:46.868202    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:49.582486    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:49.582486    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:49.583022    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:10:49.587441    4588 start.go:128] duration metric: took 2m15.3487158s to createHost
	I0610 12:10:49.587441    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:51.933279    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:51.933279    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:51.933844    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:54.672496    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:54.672834    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:54.677987    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:54.677987    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:54.678509    4588 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 12:10:54.806576    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718021454.812845033
	
	I0610 12:10:54.806642    4588 fix.go:216] guest clock: 1718021454.812845033
	I0610 12:10:54.806642    4588 fix.go:229] Guest: 2024-06-10 12:10:54.812845033 +0000 UTC Remote: 2024-06-10 12:10:49.587441 +0000 UTC m=+365.885567601 (delta=5.225404033s)
	I0610 12:10:54.806642    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:57.087646    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:57.087989    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:57.088094    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:59.860973    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:59.860973    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:59.866816    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:59.866884    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:59.866884    4588 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718021454
	I0610 12:11:00.015191    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:10:54 UTC 2024
	
	I0610 12:11:00.015191    4588 fix.go:236] clock set: Mon Jun 10 12:10:54 UTC 2024
	 (err=<nil>)
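The clock fix above reads the guest's epoch time (date +%s.%N), compares it with the host's, and, since the drift (delta=5.22s) exceeded tolerance, pushes the host time into the guest via sudo date -s @<epoch>. A manual sketch of the same check ($key as in the previous sketch; the 2-second threshold is illustrative):

    $guestEpoch = [long](ssh -i $key docker@172.17.151.128 'date +%s')
    $hostEpoch  = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
    if ([math]::Abs($hostEpoch - $guestEpoch) -gt 2) {
        ssh -i $key docker@172.17.151.128 "sudo date -s @$hostEpoch"
    }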
	I0610 12:11:00.015191    4588 start.go:83] releasing machines lock for "multinode-813300-m02", held for 2m25.7770525s
	I0610 12:11:00.015500    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:11:02.362997    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:02.362997    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:02.363073    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:05.203470    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:11:05.203551    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:05.208269    4588 out.go:177] * Found network options:
	I0610 12:11:05.211963    4588 out.go:177]   - NO_PROXY=172.17.159.171
	W0610 12:11:05.214531    4588 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 12:11:05.217146    4588 out.go:177]   - NO_PROXY=172.17.159.171
	W0610 12:11:05.219128    4588 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 12:11:05.221154    4588 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 12:11:05.223154    4588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:11:05.223154    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:11:05.233134    4588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 12:11:05.233134    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:11:07.621816    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:07.622943    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:10.545475    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:11:10.545604    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:10.546196    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:11:10.557804    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:11:10.557804    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:10.558804    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:11:10.655498    4588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0610 12:11:10.780338    4588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 12:11:10.780338    4588 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5571395s)
	I0610 12:11:10.780338    4588 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.5471587s)
	W0610 12:11:10.780338    4588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:11:10.792576    4588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:11:10.825526    4588 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 12:11:10.825771    4588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 12:11:10.825771    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:11:10.825771    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:11:10.868331    4588 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
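
The step above writes a one-line /etc/crictl.yaml pointing crictl at the containerd socket; a few lines further down the same file is rewritten for cri-dockerd once Docker is chosen as the runtime. A small Go sketch producing the same file content (the path and helper name are illustrative):

package main

import (
	"fmt"
	"os"
)

// writeCrictlConfig renders the one-line crictl.yaml seen in the log.
func writeCrictlConfig(path, endpoint string) error {
	content := fmt.Sprintf("runtime-endpoint: %s\n", endpoint)
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	// Illustrative local path; the real file lives at /etc/crictl.yaml on the node.
	if err := writeCrictlConfig("crictl.yaml", "unix:///run/containerd/containerd.sock"); err != nil {
		fmt.Println(err)
	}
}
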
	I0610 12:11:10.886782    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:11:10.926185    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:11:10.951492    4588 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:11:10.964107    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:11:10.998277    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:11:11.036407    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:11:11.071765    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:11:11.112069    4588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:11:11.147207    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:11:11.180467    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:11:11.213384    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
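
The run of sed commands above edits /etc/containerd/config.toml in place: pinning the pause image, forcing SystemdCgroup = false (the cgroupfs driver mentioned at 12:11:10.951492), migrating io.containerd.runtime.v1.linux and runc.v1 references to io.containerd.runc.v2, and resetting conf_dir. As an illustration, here is the SystemdCgroup rewrite expressed with Go's regexp package instead of sed:

package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
// preserving the line's original indentation via the capture group.
func forceCgroupfs(config string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	fmt.Print(forceCgroupfs("    SystemdCgroup = true\n"))
}
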
	I0610 12:11:11.244518    4588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:11:11.263227    4588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 12:11:11.274302    4588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:11:11.307150    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:11.524102    4588 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 12:11:11.560382    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:11:11.573859    4588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:11:11.598593    4588 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 12:11:11.598631    4588 command_runner.go:130] > [Unit]
	I0610 12:11:11.598631    4588 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 12:11:11.598668    4588 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 12:11:11.598668    4588 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 12:11:11.598668    4588 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 12:11:11.598668    4588 command_runner.go:130] > StartLimitBurst=3
	I0610 12:11:11.598668    4588 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 12:11:11.598727    4588 command_runner.go:130] > [Service]
	I0610 12:11:11.598727    4588 command_runner.go:130] > Type=notify
	I0610 12:11:11.598727    4588 command_runner.go:130] > Restart=on-failure
	I0610 12:11:11.598727    4588 command_runner.go:130] > Environment=NO_PROXY=172.17.159.171
	I0610 12:11:11.598727    4588 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 12:11:11.598727    4588 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 12:11:11.598863    4588 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 12:11:11.598863    4588 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 12:11:11.598863    4588 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 12:11:11.598863    4588 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 12:11:11.598863    4588 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 12:11:11.598963    4588 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 12:11:11.598963    4588 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 12:11:11.598963    4588 command_runner.go:130] > ExecStart=
	I0610 12:11:11.598963    4588 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 12:11:11.599028    4588 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 12:11:11.599028    4588 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 12:11:11.599028    4588 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 12:11:11.599028    4588 command_runner.go:130] > LimitNOFILE=infinity
	I0610 12:11:11.599028    4588 command_runner.go:130] > LimitNPROC=infinity
	I0610 12:11:11.599028    4588 command_runner.go:130] > LimitCORE=infinity
	I0610 12:11:11.599028    4588 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 12:11:11.599028    4588 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 12:11:11.599140    4588 command_runner.go:130] > TasksMax=infinity
	I0610 12:11:11.599140    4588 command_runner.go:130] > TimeoutStartSec=0
	I0610 12:11:11.599140    4588 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 12:11:11.599140    4588 command_runner.go:130] > Delegate=yes
	I0610 12:11:11.599140    4588 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 12:11:11.599140    4588 command_runner.go:130] > KillMode=process
	I0610 12:11:11.599140    4588 command_runner.go:130] > [Install]
	I0610 12:11:11.599140    4588 command_runner.go:130] > WantedBy=multi-user.target
	I0610 12:11:11.612843    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:11:11.652543    4588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:11:11.699581    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:11:11.738711    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:11:11.780078    4588 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:11:11.854242    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:11:11.887820    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:11:11.926828    4588 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 12:11:11.941661    4588 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:11:11.949084    4588 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 12:11:11.960762    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:11:11.987519    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:11:12.036700    4588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:11:12.255159    4588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:11:12.474321    4588 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:11:12.474461    4588 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 12:11:12.521376    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:12.736988    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:11:15.281594    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5445856s)
	I0610 12:11:15.295747    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 12:11:15.337687    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:11:15.375551    4588 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 12:11:15.617767    4588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 12:11:15.838434    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:16.049989    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 12:11:16.095406    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:11:16.132342    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:16.337717    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 12:11:16.465652    4588 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 12:11:16.479852    4588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 12:11:16.489205    4588 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 12:11:16.489286    4588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 12:11:16.489318    4588 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0610 12:11:16.489345    4588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 12:11:16.489345    4588 command_runner.go:130] > Access: 2024-06-10 12:11:16.374337285 +0000
	I0610 12:11:16.489394    4588 command_runner.go:130] > Modify: 2024-06-10 12:11:16.374337285 +0000
	I0610 12:11:16.489394    4588 command_runner.go:130] > Change: 2024-06-10 12:11:16.377337327 +0000
	I0610 12:11:16.489428    4588 command_runner.go:130] >  Birth: -
	I0610 12:11:16.489428    4588 start.go:562] Will wait 60s for crictl version
	I0610 12:11:16.501661    4588 ssh_runner.go:195] Run: which crictl
	I0610 12:11:16.508650    4588 command_runner.go:130] > /usr/bin/crictl
	I0610 12:11:16.522045    4588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:11:16.577734    4588 command_runner.go:130] > Version:  0.1.0
	I0610 12:11:16.577734    4588 command_runner.go:130] > RuntimeName:  docker
	I0610 12:11:16.577734    4588 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 12:11:16.577734    4588 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 12:11:16.577867    4588 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 12:11:16.586649    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:11:16.627174    4588 command_runner.go:130] > 26.1.4
	I0610 12:11:16.637565    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:11:16.672485    4588 command_runner.go:130] > 26.1.4
	I0610 12:11:16.677357    4588 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 12:11:16.680604    4588 out.go:177]   - env NO_PROXY=172.17.159.171
	I0610 12:11:16.682631    4588 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 12:11:16.690150    4588 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 12:11:16.690150    4588 ip.go:210] interface addr: 172.17.144.1/20
	I0610 12:11:16.703778    4588 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 12:11:16.711418    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
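
This /etc/hosts update uses a grep -v / echo / cp pipeline so the host.minikube.internal entry is replaced rather than appended a second time on restarts; the literal tab before the hostname is what grep -v filters on. A sketch that rebuilds the same command string in Go (helper name illustrative):

package main

import "fmt"

// hostsUpdateCmd rebuilds the shell pipeline from the log: filter out any
// existing line ending in "<tab><name>", append the fresh entry, then copy
// the temp file back over /etc/hosts.
func hostsUpdateCmd(ip, name string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
}

func main() {
	fmt.Println(hostsUpdateCmd("172.17.144.1", "host.minikube.internal"))
}
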
	I0610 12:11:16.733435    4588 mustload.go:65] Loading cluster: multinode-813300
	I0610 12:11:16.734138    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:11:16.734810    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:11:19.011757    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:19.012790    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:19.012790    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:11:19.013573    4588 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300 for IP: 172.17.151.128
	I0610 12:11:19.013573    4588 certs.go:194] generating shared ca certs ...
	I0610 12:11:19.013573    4588 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:11:19.013917    4588 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 12:11:19.014532    4588 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 12:11:19.014800    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 12:11:19.015170    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 12:11:19.015290    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 12:11:19.015688    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 12:11:19.016370    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 12:11:19.016618    4588 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 12:11:19.016812    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 12:11:19.017069    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 12:11:19.017245    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 12:11:19.017624    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 12:11:19.017944    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 12:11:19.017944    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.018393    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.018580    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.018708    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 12:11:19.074850    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 12:11:19.123648    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 12:11:19.175920    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 12:11:19.221951    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 12:11:19.276690    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 12:11:19.328081    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 12:11:19.391788    4588 ssh_runner.go:195] Run: openssl version
	I0610 12:11:19.402568    4588 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 12:11:19.420480    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 12:11:19.454097    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.461999    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.461999    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.475323    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.489426    4588 command_runner.go:130] > b5213941
	I0610 12:11:19.501484    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 12:11:19.534058    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 12:11:19.566004    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.572892    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.573207    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.584393    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.594218    4588 command_runner.go:130] > 51391683
	I0610 12:11:19.608435    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 12:11:19.641477    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 12:11:19.673326    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.680330    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.680882    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.692878    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.704044    4588 command_runner.go:130] > 3ec20f2e
	I0610 12:11:19.714906    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
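
Each CA certificate copied under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941, 51391683, 3ec20f2e above); OpenSSL resolves trust by looking up <hash>.0 in that directory. A sketch of the hash-and-link step, shelling out to openssl the same way the log does (function name illustrative; meant to run on the node, not the Windows host):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces: openssl x509 -hash -noout -in <pem>
// followed by: ln -fs <pem> /etc/ssl/certs/<hash>.0
func linkBySubjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	return link, exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err)
}
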
	I0610 12:11:19.746683    4588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:11:19.753164    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:11:19.753835    4588 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:11:19.753979    4588 kubeadm.go:928] updating node {m02 172.17.151.128 8443 v1.30.1 docker false true} ...
	I0610 12:11:19.753979    4588 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.151.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 12:11:19.766808    4588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 12:11:19.786670    4588 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	I0610 12:11:19.786670    4588 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 12:11:19.799248    4588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 12:11:19.819418    4588 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0610 12:11:19.819418    4588 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0610 12:11:19.820008    4588 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
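
"Not caching binary" means each of kubelet, kubeadm and kubectl is fetched from dl.k8s.io and validated against the published .sha256 file rather than served from a local cache. A sketch of just the verification half of that pattern (download plumbing omitted):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// verifySHA256 checks a downloaded blob against the hex digest served
// from the corresponding <url>.sha256 file.
func verifySHA256(data []byte, published string) error {
	fields := strings.Fields(published) // the digest may be followed by a filename
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file")
	}
	want := fields[0]
	sum := sha256.Sum256(data)
	if got := hex.EncodeToString(sum[:]); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	data := []byte("example")
	sum := sha256.Sum256(data)
	fmt.Println(verifySHA256(data, hex.EncodeToString(sum[:])))
}
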
	I0610 12:11:19.820008    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 12:11:19.820186    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 12:11:19.837476    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:11:19.838584    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 12:11:19.841021    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 12:11:19.860269    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 12:11:19.860269    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 12:11:19.860899    4588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 12:11:19.860899    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 12:11:19.861094    4588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 12:11:19.861094    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 12:11:19.861150    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 12:11:19.875476    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 12:11:19.927216    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 12:11:19.928269    4588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 12:11:19.928622    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0610 12:11:21.395244    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0610 12:11:21.414600    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0610 12:11:21.454103    4588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 12:11:21.515630    4588 ssh_runner.go:195] Run: grep 172.17.159.171	control-plane.minikube.internal$ /etc/hosts
	I0610 12:11:21.522801    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:11:21.563217    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:21.775475    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:11:21.807974    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:11:21.808784    4588 start.go:316] joinCluster: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:11:21.808980    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 12:11:21.809040    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:11:24.214569    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:24.214569    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:24.215479    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:26.984919    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:11:26.984919    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:26.985727    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:11:27.193620    4588 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gf7439.c4abko5fnf4w17n8 --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
	I0610 12:11:27.193620    4588 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3845966s)
	I0610 12:11:27.193620    4588 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:11:27.193620    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gf7439.c4abko5fnf4w17n8 --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-813300-m02"
	I0610 12:11:27.412803    4588 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:11:29.260064    4588 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 12:11:29.260064    4588 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0610 12:11:29.260064    4588 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0610 12:11:29.260064    4588 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:11:29.260064    4588 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.502015791s
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0610 12:11:29.260185    4588 command_runner.go:130] > This node has joined the cluster:
	I0610 12:11:29.260185    4588 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0610 12:11:29.260185    4588 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0610 12:11:29.260185    4588 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0610 12:11:29.260185    4588 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gf7439.c4abko5fnf4w17n8 --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-813300-m02": (2.0665485s)
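
The join is a two-step handshake: the control plane mints a non-expiring token with "kubeadm token create --print-join-command --ttl=0", and the printed command is replayed on the worker with the cri-dockerd socket and node name appended, exactly as the two Run lines above show. A sketch of that flag-appending step (helper name illustrative; token redacted here):

package main

import (
	"fmt"
	"strings"
)

// workerJoinCmd appends the node-specific flags seen in the log to the
// join command printed by "kubeadm token create --print-join-command".
func workerJoinCmd(printed, criSocket, nodeName string) string {
	return strings.TrimSpace(printed) +
		" --ignore-preflight-errors=all" +
		fmt.Sprintf(" --cri-socket %s --node-name=%s", criSocket, nodeName)
}

func main() {
	printed := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
	fmt.Println(workerJoinCmd(printed, "unix:///var/run/cri-dockerd.sock", "multinode-813300-m02"))
}
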
	I0610 12:11:29.260308    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 12:11:29.477872    4588 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0610 12:11:29.694891    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-813300-m02 minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=multinode-813300 minikube.k8s.io/primary=false
	I0610 12:11:29.850112    4588 command_runner.go:130] > node/multinode-813300-m02 labeled
	I0610 12:11:29.850212    4588 start.go:318] duration metric: took 8.0413623s to joinCluster
	I0610 12:11:29.850367    4588 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:11:29.851036    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:11:29.855200    4588 out.go:177] * Verifying Kubernetes components...
	I0610 12:11:29.872060    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:30.101494    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:11:30.133140    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:11:30.133905    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:11:30.134653    4588 node_ready.go:35] waiting up to 6m0s for node "multinode-813300-m02" to be "Ready" ...
	I0610 12:11:30.135218    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:30.135218    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:30.135218    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:30.135218    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:30.154207    4588 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0610 12:11:30.154300    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:30 GMT
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Audit-Id: 120211c2-3f44-4da6-84af-a42103a0ca12
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:30.154300    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:30.154300    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:30.154462    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:30.640539    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:30.640539    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:30.640539    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:30.640539    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:30.648978    4588 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:11:30.648978    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:30.648978    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:30.648978    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:30 GMT
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Audit-Id: b18c775d-77ef-4caa-914c-7283fd55f1aa
	I0610 12:11:30.648978    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:31.145201    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:31.145282    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:31.145282    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:31.145282    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:31.152903    4588 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:11:31.152903    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:31.152903    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:31.152903    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:31 GMT
	I0610 12:11:31.152903    4588 round_trippers.go:580]     Audit-Id: 53a17888-1a8e-4851-8815-1bc758b4e0d1
	I0610 12:11:31.153005    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:31.153005    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:31.153005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:31.153005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:31.153133    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:31.642808    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:31.642895    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:31.642895    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:31.642895    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:31.646234    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:31.647170    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:31.647238    4588 round_trippers.go:580]     Audit-Id: 2c94ef73-ffa9-41c2-9f48-2d1eda7b40b0
	I0610 12:11:31.647238    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:31.647258    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:31.647258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:31.647258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:31.647258    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:31.647258    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:31 GMT
	I0610 12:11:31.647389    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:32.146589    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:32.146654    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:32.146654    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:32.146654    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:32.151245    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:32.151473    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:32.151473    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:32.151473    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:32 GMT
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Audit-Id: 02a02b92-b406-46fa-a89f-f11d3aa78b57
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:32.151619    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:32.152091    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
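
The block of GETs against /api/v1/nodes/multinode-813300-m02 is a readiness poll: the Node object is re-fetched roughly every 500ms and its Ready condition checked, up to the 6m0s budget declared at 12:11:30.134653. A self-contained sketch of such a loop over the raw REST endpoint (minikube itself goes through client-go; TLS and auth setup are omitted here, so this is an illustration only):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls the API server until the node's Ready condition
// reports "True" or the timeout elapses.
func waitNodeReady(client *http.Client, url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			var n nodeStatus
			_ = json.NewDecoder(resp.Body).Decode(&n)
			resp.Body.Close()
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node not Ready within %s", timeout)
}

func main() {
	err := waitNodeReady(http.DefaultClient, "https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02", 6*time.Minute)
	fmt.Println(err)
}
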
	I0610 12:11:32.647908    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:32.647908    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:32.647908    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:32.647908    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:32.655278    4588 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:11:32.656309    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:32.656309    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:32.656309    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:32.656309    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:32.656381    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:32.656381    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:32.656381    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:32 GMT
	I0610 12:11:32.656381    4588 round_trippers.go:580]     Audit-Id: 8be91a38-9480-4dc6-bb32-e813479247b1
	I0610 12:11:32.656509    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:33.136161    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:33.136161    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:33.136161    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:33.136370    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:33.140480    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:33.140480    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:33.140480    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:33 GMT
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Audit-Id: 829ec5bb-9a54-441f-9a33-3fac4f603fda
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:33.140595    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:33.140677    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:33.649302    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:33.649302    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:33.649302    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:33.649302    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:33.653244    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:33.653244    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:33 GMT
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Audit-Id: f5522161-62a8-4be2-b191-8cee428580bd
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:33.653782    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:33.653782    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:33.653862    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:33.653862    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:34.140515    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:34.140774    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:34.140774    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:34.140774    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:34.144741    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:34.144836    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:34.144836    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:34.144917    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:34 GMT
	I0610 12:11:34.144998    4588 round_trippers.go:580]     Audit-Id: ffbf68f4-fcd8-46dd-aeb6-1bbbbe2cb644
	I0610 12:11:34.144998    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:34.144998    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:34.145028    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:34.145028    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:34.145028    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:34.641306    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:34.641355    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:34.641355    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:34.641395    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:34.648180    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:11:34.649068    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:34.649068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:34.649068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:34.649068    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:34.649068    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:34 GMT
	I0610 12:11:34.649158    4588 round_trippers.go:580]     Audit-Id: 3161a238-0ca8-4ad9-b851-e3ba727a1005
	I0610 12:11:34.649158    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:34.649158    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:34.649480    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:34.649960    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
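The run of entries above is minikube's node-readiness wait loop: roughly every 500ms it issues GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02 and re-checks the Node's Ready condition, logging the node_ready.go:53 line each time the status is still "False". A minimal sketch of the same pattern using client-go follows; the package name, the 500ms interval, and the error handling are illustrative assumptions, not minikube's actual node_ready.go:

    package node // sketch only; mirrors the wait loop visible in this log

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the API server (one GET per interval, as above)
    // until the node reports Ready=True or the timeout expires.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil // Ready condition not posted yet
            })
    }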
	I0610 12:11:35.141434    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:35.141434    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:35.141434    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:35.141544    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:35.144794    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:35.145459    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Audit-Id: 8bfb5db6-acd9-419a-a15c-52a9cae18cf4
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:35.145459    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:35.145459    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:35 GMT
	I0610 12:11:35.145647    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:35.649334    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:35.649334    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:35.649334    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:35.649334    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:35.654625    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:11:35.654625    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:35.654625    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:35.654625    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:35.654625    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:35.654625    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:35.654719    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:35 GMT
	I0610 12:11:35.654719    4588 round_trippers.go:580]     Audit-Id: 36583692-c8d0-4e9c-9ce6-c1c822dd5fa2
	I0610 12:11:35.654719    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:35.654755    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:36.140102    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:36.140102    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:36.140102    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:36.140102    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:36.143717    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:36.143988    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:36.143988    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:36.143988    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:36.143988    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:36 GMT
	I0610 12:11:36.143988    4588 round_trippers.go:580]     Audit-Id: 677c1be2-6b1f-4364-9375-811a12bc2d54
	I0610 12:11:36.144073    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:36.144073    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:36.144073    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:36.144299    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:36.647892    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:36.647892    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:36.647960    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:36.647960    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:36.652449    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:36.652449    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:36.653266    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:36.653266    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:36.653266    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:36.653266    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:36 GMT
	I0610 12:11:36.653367    4588 round_trippers.go:580]     Audit-Id: 0775cb60-f275-466b-beb7-fbd374a788eb
	I0610 12:11:36.653367    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:36.653367    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:36.653528    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:36.654008    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:37.140931    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:37.140931    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:37.140931    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:37.140931    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:37.145903    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:37.145903    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:37.145992    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:37.145992    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:37 GMT
	I0610 12:11:37.146069    4588 round_trippers.go:580]     Audit-Id: 0ae2f0c6-2a9b-45d0-a1d0-d6e366a1cda3
	I0610 12:11:37.146134    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:37.649232    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:37.649232    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:37.649232    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:37.649232    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:37.654247    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:11:37.654537    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:37.654537    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:37.654537    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:37 GMT
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Audit-Id: 6692a4c9-18ea-498b-9bac-d8956738e490
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:37.654750    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:38.140018    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:38.140097    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:38.140097    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:38.140097    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:38.143731    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:38.144482    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:38.144482    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:38.144482    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:38.144482    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:38.144569    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:38.144569    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:38.144569    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:38 GMT
	I0610 12:11:38.144569    4588 round_trippers.go:580]     Audit-Id: 8d344def-2d40-4c03-9670-8ae9d6a107b8
	I0610 12:11:38.144569    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:38.645605    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:38.645605    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:38.645605    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:38.645605    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:38.650198    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:38.650198    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:38.650198    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:38.650198    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:38.650198    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:38.650198    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:38 GMT
	I0610 12:11:38.650372    4588 round_trippers.go:580]     Audit-Id: 262d504f-c6bd-4fe3-8221-cde83d48b444
	I0610 12:11:38.650372    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:38.650372    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:38.650598    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:39.145556    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:39.145556    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:39.145556    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:39.145556    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:39.150540    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:39.151438    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:39.151438    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:39.151438    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:39 GMT
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Audit-Id: 97195732-aef3-4a63-8e27-d623b638c932
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:39.152316    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:39.152904    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
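Each "Response Body" above is a v1 Node object, truncated by the client's request logger; the "Ready":"False" verdict at node_ready.go:53 is derived from the object's status.conditions list, which falls past the truncation point here. A minimal sketch of pulling that flag out of such a JSON body, declaring only the fields needed (an assumption made for brevity):

    package node

    import "encoding/json"

    // readyStatus extracts the Ready condition ("True", "False", or
    // "Unknown") from a v1 Node JSON body like the ones logged above.
    func readyStatus(body []byte) (string, error) {
        var n struct {
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        }
        if err := json.Unmarshal(body, &n); err != nil {
            return "", err
        }
        for _, c := range n.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status, nil
            }
        }
        return "Unknown", nil // condition not yet posted by the kubelet
    }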
	I0610 12:11:39.646188    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:39.646188    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:39.646188    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:39.646188    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:39.650273    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:39.650347    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:39.650347    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:39.650347    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:39 GMT
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Audit-Id: 63a802e5-f779-4df4-95b0-69698f33f890
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:39.650611    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:40.135464    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:40.135464    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:40.135464    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:40.135464    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:40.139465    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:40.139465    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:40 GMT
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Audit-Id: 3005cad6-5eb1-4e80-9df6-7f76602ade8f
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:40.140037    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:40.140037    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:40.140181    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:40.647037    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:40.647242    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:40.647242    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:40.647242    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:40.652362    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:11:40.652362    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:40.652362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:40.652362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:40 GMT
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Audit-Id: b703d6a1-f080-4fd1-a944-38afee287a18
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:40.652965    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:41.137147    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:41.137147    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:41.137147    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:41.137147    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:41.141766    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:41.141766    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:41.141766    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:41.141766    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:41 GMT
	I0610 12:11:41.141766    4588 round_trippers.go:580]     Audit-Id: 31939970-7805-4a89-9e76-a7fad299f03e
	I0610 12:11:41.142164    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:41.142164    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:41.142164    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:41.142304    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:41.644436    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:41.644493    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:41.644493    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:41.644493    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:41.648780    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:41.648780    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:41.648780    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:41.648780    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:41.648780    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:41.648780    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:41 GMT
	I0610 12:11:41.648780    4588 round_trippers.go:580]     Audit-Id: 0825d248-901f-4c1d-810e-5285b2152eed
	I0610 12:11:41.649725    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:41.649994    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:41.650452    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:42.136785    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:42.136785    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:42.136785    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:42.136785    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:42.140392    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:42.140392    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:42.140392    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:42.140392    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:42 GMT
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Audit-Id: 11b80fc3-7764-4796-b629-31a53e9d8efe
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:42.141123    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:42.646819    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:42.646819    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:42.646819    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:42.646819    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:42.651676    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:42.651676    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Audit-Id: 05161fa0-65a0-4dfa-9fce-c6366744f573
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:42.651676    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:42.651676    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:42 GMT
	I0610 12:11:42.652003    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:43.140233    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:43.140503    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:43.140503    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:43.140589    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:43.143984    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:43.143984    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:43.143984    4588 round_trippers.go:580]     Audit-Id: 048190cf-d8d4-4e7c-ad65-ba33997dd557
	I0610 12:11:43.144542    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:43.144542    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:43.144542    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:43.144542    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:43.144542    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:43 GMT
	I0610 12:11:43.144821    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:43.646980    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:43.646980    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:43.647093    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:43.647093    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:43.649867    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:43.650767    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:43.650767    4588 round_trippers.go:580]     Audit-Id: debea53e-3d89-46ce-9861-43438e7ef3fb
	I0610 12:11:43.650903    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:43.650903    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:43.650903    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:43.650903    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:43.650903    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:43 GMT
	I0610 12:11:43.650903    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:43.650903    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:44.141683    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:44.141759    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:44.141759    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:44.141759    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:44.146005    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:44.146005    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Audit-Id: fd0b413f-d703-4826-88f7-f92b964e7225
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:44.146005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:44.146005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:44 GMT
	I0610 12:11:44.146005    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:44.648434    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:44.648568    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:44.648568    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:44.648568    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:45.026766    4588 round_trippers.go:574] Response Status: 200 OK in 378 milliseconds
	I0610 12:11:45.026888    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Audit-Id: 2ffab90b-53ae-414a-a7af-dc244c1a0d38
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:45.026939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:45.026939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:45 GMT
	I0610 12:11:45.026939    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:45.150155    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:45.150155    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:45.150155    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:45.150155    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:45.154085    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:45.154085    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:45.154085    4588 round_trippers.go:580]     Audit-Id: 96ef9dbe-5664-4716-9850-3761e6347748
	I0610 12:11:45.154150    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:45.154150    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:45.154150    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:45.154150    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:45.154150    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:45 GMT
	I0610 12:11:45.154663    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:45.640479    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:45.640479    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:45.640479    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:45.640479    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:45.644051    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:45.644886    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:45.644886    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:45.644886    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:45 GMT
	I0610 12:11:45.644990    4588 round_trippers.go:580]     Audit-Id: 59a50b84-480f-4407-866c-91f7a741c38f
	I0610 12:11:45.645063    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:45.645140    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:45.645229    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:45.645297    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:46.144014    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:46.144073    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:46.144073    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:46.144073    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:46.147638    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:46.147638    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:46.147638    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:46.147638    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:46.147638    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:46 GMT
	I0610 12:11:46.147638    4588 round_trippers.go:580]     Audit-Id: ae143dec-a170-46f3-8120-7d6e3e03234a
	I0610 12:11:46.148117    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:46.148117    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:46.148620    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:46.148620    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
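
The timestamps above show the pattern repeated for the rest of this excerpt: the status check re-fetches the Node object roughly every 500ms until its Ready condition flips to true. For reference, here is a minimal, hypothetical sketch of that poll using client-go — not minikube's actual node_ready.go; the kubeconfig path, poll interval, and node name are assumptions taken from the log.

```go
// Hypothetical node-readiness poll (a sketch, not minikube's node_ready.go).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// One iteration == one GET /api/v1/nodes/<name>, i.e. one
		// round_trippers request/response block in the log above.
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-813300-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // interval inferred from the log timestamps
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```
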
	I0610 12:11:46.640820    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:46.640989    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:46.640989    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:46.641063    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:46.645172    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:46.645213    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Audit-Id: 847d5b54-5db6-4652-9704-c8c39063334c
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:46.645213    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:46.645213    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:46 GMT
	I0610 12:11:46.645213    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:47.141987    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:47.141987    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:47.141987    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:47.141987    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:47.145594    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:47.145594    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:47.145996    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:47.145996    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:47 GMT
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Audit-Id: 51c3741a-3779-4687-9675-ec8b78395d73
	I0610 12:11:47.146242    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:47.639611    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:47.639688    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:47.639688    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:47.639688    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:47.643746    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:47.643746    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:47.643746    4588 round_trippers.go:580]     Audit-Id: e150216b-0242-4c48-ba26-ceed233c4e9e
	I0610 12:11:47.643746    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:47.643877    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:47.643877    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:47.643877    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:47.643877    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:47 GMT
	I0610 12:11:47.644149    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:48.138285    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:48.138501    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:48.138501    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:48.138501    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:48.142963    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:48.142963    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:48.143652    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:48.143652    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:48 GMT
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Audit-Id: 1b862a76-f4a3-4be6-a4f2-bf278ed88005
	I0610 12:11:48.143747    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:48.650829    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:48.650909    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:48.650909    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:48.650909    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:48.660633    4588 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 12:11:48.660899    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Audit-Id: 17e5626d-5a6a-46d3-bc16-7e7057afeec3
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:48.660899    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:48.660899    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:48 GMT
	I0610 12:11:48.661433    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:48.661959    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:49.136114    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:49.136114    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:49.136114    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:49.136114    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:49.140691    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:49.140691    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Audit-Id: 0c770e35-ded7-43e1-876e-cb07a38fd2ec
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:49.141710    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:49.141710    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:49 GMT
	I0610 12:11:49.141900    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:49.649392    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:49.649667    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:49.649722    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:49.649722    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:49.656181    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:11:49.656181    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:49.656181    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:49.656181    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:49 GMT
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Audit-Id: f82a420f-5dd7-47d8-950d-49e3d39c7c47
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:49.656719    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:50.150676    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:50.150676    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:50.150676    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:50.150676    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:50.155265    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:50.155265    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:50.155265    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:50.155265    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:50 GMT
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Audit-Id: fef3067f-7dbf-4d79-bc69-c0238a7f6f1e
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:50.155735    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:50.649159    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:50.649159    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:50.649159    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:50.649159    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:50.653519    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:50.653519    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:50.653519    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:50.653519    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:50 GMT
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Audit-Id: 8688b0cf-3044-4665-8f85-fc7d50db907c
	I0610 12:11:50.653519    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:51.149572    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:51.149572    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.149572    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.149572    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.154215    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:51.154479    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.154479    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.154479    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Audit-Id: 212364d7-a337-45b2-9ccb-42587fa16fbd
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.154574    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:51.154574    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:51.636574    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:51.636574    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.636574    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.636574    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.648795    4588 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0610 12:11:51.648795    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.648795    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.648795    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.648795    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.648874    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.648874    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.648874    4588 round_trippers.go:580]     Audit-Id: 15cb6306-cb2e-42c9-90f9-f0ea78aa907e
	I0610 12:11:51.649046    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"640","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0610 12:11:51.649843    4588 node_ready.go:49] node "multinode-813300-m02" has status "Ready":"True"
	I0610 12:11:51.649913    4588 node_ready.go:38] duration metric: took 21.5150861s for node "multinode-813300-m02" to be "Ready" ...
	I0610 12:11:51.649913    4588 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
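
Once the node is Ready, the wait moves on to the system-critical pods named by the label selectors in the line above. The log does this with one unfiltered PodList followed by per-pod GETs; the sketch below condenses the same check into label-filtered lists. It is hypothetical, and assumes the imports and `client` value from the previous sketch.

```go
// podReady reports whether the Pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// systemPodsReady checks every kube-system pod matching the selectors
// logged above: kube-dns, etcd, kube-apiserver, kube-controller-manager,
// kube-proxy, and kube-scheduler.
func systemPodsReady(ctx context.Context, client kubernetes.Interface) (bool, error) {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		for i := range pods.Items {
			if !podReady(&pods.Items[i]) {
				return false, nil
			}
		}
	}
	return true, nil
}
```
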
	I0610 12:11:51.649984    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:11:51.649984    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.649984    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.649984    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.658205    4588 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:11:51.658205    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.658205    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.658205    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Audit-Id: 4892c8a9-dc91-4772-83d2-aaf257434292
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.659421    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"640"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70486 chars]
	I0610 12:11:51.663308    4588 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.663308    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:11:51.663308    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.663308    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.663308    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.666480    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:51.666717    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.666717    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.666717    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Audit-Id: 29e5482f-5681-47f7-833b-ea8a2eaca847
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.666984    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0610 12:11:51.667673    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.667673    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.667673    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.667732    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.669455    4588 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 12:11:51.669455    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.669455    4588 round_trippers.go:580]     Audit-Id: bc194cc6-fd6f-420a-89b0-01f8d0a70bfd
	I0610 12:11:51.669455    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.670408    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.670408    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.670408    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.670408    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.670809    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.671358    4588 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.671358    4588 pod_ready.go:81] duration metric: took 8.0504ms for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.671358    4588 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.671495    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-813300
	I0610 12:11:51.671592    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.671592    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.671657    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.673658    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.673658    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.673658    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.673658    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Audit-Id: 7b458228-14ae-4077-b82e-2cbe339be6a6
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.674781    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"e48af956-8533-4b8e-be5d-0834484cbffa","resourceVersion":"385","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.159.171:2379","kubernetes.io/config.hash":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.mirror":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.seen":"2024-06-10T12:08:00.781973961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0610 12:11:51.674781    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.675319    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.675319    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.675319    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.678378    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.678579    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.678579    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Audit-Id: 67628109-d0cf-4546-acc6-77a9b7f24051
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.678579    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.678984    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.678984    4588 pod_ready.go:92] pod "etcd-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.678984    4588 pod_ready.go:81] duration metric: took 7.6256ms for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.678984    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.678984    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-813300
	I0610 12:11:51.678984    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.679522    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.679522    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.681723    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.681723    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.682457    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Audit-Id: 006b6c27-a6c2-4581-9d6d-b3591452ff62
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.682457    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.682703    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-813300","namespace":"kube-system","uid":"f824b391-b3d2-49ec-ba7d-863cb2150f81","resourceVersion":"386","creationTimestamp":"2024-06-10T12:07:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.159.171:8443","kubernetes.io/config.hash":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.mirror":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.seen":"2024-06-10T12:07:52.425003820Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0610 12:11:51.682824    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.682824    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.682824    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.682824    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.686165    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.686165    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Audit-Id: 1a7c9c37-ae20-4df4-9b97-f0c2a3dbc6bd
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.686272    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.686272    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.686558    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.687382    4588 pod_ready.go:92] pod "kube-apiserver-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.687439    4588 pod_ready.go:81] duration metric: took 8.4554ms for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.687516    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.687601    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-813300
	I0610 12:11:51.687601    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.687601    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.687601    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.690594    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.691080    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Audit-Id: 99614bca-e7d3-4d5a-bcd7-a928cb9b154e
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.691080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.691080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.691464    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-813300","namespace":"kube-system","uid":"879be9d7-8b2b-4f58-ba70-61d4e9f3441e","resourceVersion":"384","creationTimestamp":"2024-06-10T12:08:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.mirror":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.seen":"2024-06-10T12:08:00.781970961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0610 12:11:51.692144    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.692144    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.692144    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.692144    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.694634    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.694634    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.694634    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.694634    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.694984    4588 round_trippers.go:580]     Audit-Id: 32d4392b-f53e-46ab-be25-56be6d4cbf25
	I0610 12:11:51.694984    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.695078    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.695101    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.695358    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.695860    4588 pod_ready.go:92] pod "kube-controller-manager-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.695917    4588 pod_ready.go:81] duration metric: took 8.4006ms for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.695964    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.839454    4588 request.go:629] Waited for 143.1953ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:11:51.839923    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:11:51.839923    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.839923    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.839923    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.843515    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:51.843814    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.843814    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.843884    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.843884    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.843921    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.843921    4588 round_trippers.go:580]     Audit-Id: ae52edfd-adbd-41e2-9903-60b4ca215d9e
	I0610 12:11:51.843921    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.843921    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrpvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1","resourceVersion":"380","creationTimestamp":"2024-06-10T12:08:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0610 12:11:52.037284    4588 request.go:629] Waited for 192.0358ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.037410    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.037470    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.037470    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.037470    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.041986    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:52.041986    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.041986    4588 round_trippers.go:580]     Audit-Id: 6f58beea-d4d9-4031-a26a-f0800096bfaa
	I0610 12:11:52.043065    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.043065    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.043065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.043065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.043065    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.043433    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:52.044120    4588 pod_ready.go:92] pod "kube-proxy-nrpvt" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:52.044181    4588 pod_ready.go:81] duration metric: took 348.2135ms for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.044181    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.249108    4588 request.go:629] Waited for 204.4773ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:11:52.249396    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:11:52.249396    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.249396    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.249396    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.253114    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:52.254189    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Audit-Id: 22ba6e39-243b-40db-98c8-3e627dba7115
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.254189    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.254189    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.254310    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rx2b2","generateName":"kube-proxy-","namespace":"kube-system","uid":"ce59a99b-a561-4598-9399-147f748433a2","resourceVersion":"622","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0610 12:11:52.451902    4588 request.go:629] Waited for 196.8687ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:52.452172    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:52.452172    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.452227    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.452227    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.456977    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:52.456977    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.456977    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.457882    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.457882    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.457882    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.457882    4588 round_trippers.go:580]     Audit-Id: 952f9251-dd4e-4d64-989c-68606172a0ae
	I0610 12:11:52.457882    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.458487    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"640","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0610 12:11:52.458526    4588 pod_ready.go:92] pod "kube-proxy-rx2b2" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:52.458526    4588 pod_ready.go:81] duration metric: took 414.2651ms for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.458526    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.638866    4588 request.go:629] Waited for 180.175ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:11:52.639129    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:11:52.639129    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.639129    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.639129    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.642844    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:52.642844    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Audit-Id: 812d93e6-be52-4acc-b0ac-ecbab159315b
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.642844    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.642844    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.643940    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-813300","namespace":"kube-system","uid":"bd85735c-2f0d-48ab-bb0e-83f471c3af0a","resourceVersion":"387","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.mirror":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.seen":"2024-06-10T12:08:00.781972261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0610 12:11:52.842848    4588 request.go:629] Waited for 197.3782ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.843029    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.843029    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.843029    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.843029    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.846380    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:52.846380    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.847068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Audit-Id: 4d4f8b3e-cb53-4801-94ee-6aeaebe31fb6
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.847068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.847544    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:52.848051    4588 pod_ready.go:92] pod "kube-scheduler-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:52.848114    4588 pod_ready.go:81] duration metric: took 389.5849ms for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.848114    4588 pod_ready.go:38] duration metric: took 1.1981912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:11:52.848184    4588 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:11:52.860356    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:11:52.887428    4588 system_svc.go:56] duration metric: took 38.3195ms WaitForService to wait for kubelet
	I0610 12:11:52.887428    4588 kubeadm.go:576] duration metric: took 23.0368067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:11:52.887492    4588 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:11:53.045346    4588 request.go:629] Waited for 157.5222ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes
	I0610 12:11:53.045433    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes
	I0610 12:11:53.045433    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:53.045527    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:53.045527    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:53.049939    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:53.049939    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:53.049939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:53.049939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:53 GMT
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Audit-Id: f303c0c3-82b7-4c72-b12a-228fca786f50
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:53.051319    4588 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"642"},"items":[{"metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9269 chars]
	I0610 12:11:53.051858    4588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:11:53.052041    4588 node_conditions.go:123] node cpu capacity is 2
	I0610 12:11:53.052041    4588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:11:53.052041    4588 node_conditions.go:123] node cpu capacity is 2
	I0610 12:11:53.052041    4588 node_conditions.go:105] duration metric: took 164.5477ms to run NodePressure ...
	I0610 12:11:53.052127    4588 start.go:240] waiting for startup goroutines ...
	I0610 12:11:53.052168    4588 start.go:254] writing updated cluster config ...
	I0610 12:11:53.067074    4588 ssh_runner.go:195] Run: rm -f paused
	I0610 12:11:53.212519    4588 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 12:11:53.217393    4588 out.go:177] * Done! kubectl is now configured to use "multinode-813300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.123513267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235169134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235268934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235298134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235560636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:08:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 12:08:31 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:08:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a0bc6043f7b92f091f4ceee7db3e11617072391c6e5303f4ecdafdb06d4b585a/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.730390719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.730618620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.730710821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.732556631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.765650908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.765730109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.765799609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.766004410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.303731826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.304019627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.304037527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.304223128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:20 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:12:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 12:12:21 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:12:21Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.074732018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.076936421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.077116521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.077673422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	91782a06524c6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago       Running             busybox                   0                   9ffef928b2474       busybox-fc5497c4f-z28tq
	f2e39052db195       cbb01a7bd410d                                                                                         9 minutes ago       Running             coredns                   0                   a1ae7aed00678       coredns-7db6d8ff4d-kbhvv
	d32ce22e31b06       6e38f40d628db                                                                                         9 minutes ago       Running             storage-provisioner       0                   a0bc6043f7b92       storage-provisioner
	c39d54960e7d7       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              9 minutes ago       Running             kindnet-cni               0                   689b8976cc029       kindnet-29gbv
	afad8b05897e5       747097150317f                                                                                         9 minutes ago       Running             kube-proxy                0                   62db1c721951a       kube-proxy-nrpvt
	bd1a6cd987430       a52dc94f0a912                                                                                         9 minutes ago       Running             kube-scheduler            0                   e3b6aa9a0e1d1       kube-scheduler-multinode-813300
	f1409bf44ff14       25a1387cdab82                                                                                         9 minutes ago       Running             kube-controller-manager   0                   f04d7b3d4fcc6       kube-controller-manager-multinode-813300
	34b9299d74e38       3861cfcd7c04c                                                                                         9 minutes ago       Running             etcd                      0                   a10e49596de5e       etcd-multinode-813300
	ba52603f83875       91be940803172                                                                                         9 minutes ago       Running             kube-apiserver            0                   c7d28a97ba1c4       kube-apiserver-multinode-813300
	
	
	==> coredns [f2e39052db19] <==
	[INFO] 10.244.1.2:46174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001048s
	[INFO] 10.244.0.3:52212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003513s
	[INFO] 10.244.0.3:44369 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095801s
	[INFO] 10.244.0.3:38578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001615s
	[INFO] 10.244.0.3:38593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002977s
	[INFO] 10.244.0.3:38526 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137201s
	[INFO] 10.244.0.3:48445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001467s
	[INFO] 10.244.0.3:47462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000731s
	[INFO] 10.244.0.3:58225 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196101s
	[INFO] 10.244.1.2:35924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001833s
	[INFO] 10.244.1.2:51712 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001386s
	[INFO] 10.244.1.2:37161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007s
	[INFO] 10.244.1.2:37141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141s
	[INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001227s
	[INFO] 10.244.0.3:56133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247001s
	[INFO] 10.244.0.3:48451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000604s
	[INFO] 10.244.0.3:38368 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001264s
	[INFO] 10.244.1.2:44129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001056s
	[INFO] 10.244.1.2:34710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001955s
	[INFO] 10.244.1.2:59467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001589s
	[INFO] 10.244.1.2:53581 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002131s
	[INFO] 10.244.0.3:41745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	[INFO] 10.244.0.3:53512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001784s
	[INFO] 10.244.0.3:56441 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001208s
	[INFO] 10.244.0.3:55640 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001199s
	
	
	==> describe nodes <==
	Name:               multinode-813300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-813300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-813300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:07:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-813300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:17:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:12:36 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:12:36 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:12:36 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:12:36 +0000   Mon, 10 Jun 2024 12:08:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.159.171
	  Hostname:    multinode-813300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 04dc333273774adc9b2cebbeee4c799a
	  System UUID:                5734c1ff-f59b-f647-9c36-fb6d9a8cd541
	  Boot ID:                    c2d6ffa5-8803-4682-946d-e778abe2b7af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-z28tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 coredns-7db6d8ff4d-kbhvv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m24s
	  kube-system                 etcd-multinode-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m38s
	  kube-system                 kindnet-29gbv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m25s
	  kube-system                 kube-apiserver-multinode-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 kube-controller-manager-multinode-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-proxy-nrpvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 kube-scheduler-multinode-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m21s  kube-proxy       
	  Normal  Starting                 9m39s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m39s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m39s  kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m39s  kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m39s  kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m25s  node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	  Normal  NodeReady                9m9s   kubelet          Node multinode-813300 status is now: NodeReady
	
	
	Name:               multinode-813300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-813300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-813300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:11:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-813300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:17:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:17:36 +0000   Mon, 10 Jun 2024 12:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:17:36 +0000   Mon, 10 Jun 2024 12:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:17:36 +0000   Mon, 10 Jun 2024 12:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:17:36 +0000   Mon, 10 Jun 2024 12:11:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.151.128
	  Hostname:    multinode-813300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d46b791e8a04ff7a071c88405a5a4eb
	  System UUID:                e053fc34-e8e5-6649-afc7-f62c0d458753
	  Boot ID:                    a3528c50-da8b-4321-8198-65ea5eca732a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-czxmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kindnet-r4nfq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-proxy-rx2b2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m59s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m11s (x2 over 6m11s)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s (x2 over 6m11s)  kubelet          Node multinode-813300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s (x2 over 6m11s)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m10s                  node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	  Normal  NodeReady                5m48s                  kubelet          Node multinode-813300-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.208733] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun10 12:06] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.196226] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[Jun10 12:07] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.123164] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.597831] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.216475] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.252946] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +2.841084] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.239357] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.201793] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.312951] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[ +11.774213] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.120592] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.210672] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.442980] systemd-fstab-generator[1714]: Ignoring "noauto" option for root device
	[  +0.108322] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.582828] systemd-fstab-generator[2127]: Ignoring "noauto" option for root device
	[Jun10 12:08] kauditd_printk_skb: 62 callbacks suppressed
	[ +15.292472] systemd-fstab-generator[2331]: Ignoring "noauto" option for root device
	[  +0.227353] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.603365] kauditd_printk_skb: 51 callbacks suppressed
	[Jun10 12:12] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [34b9299d74e3] <==
	{"level":"info","ts":"2024-06-10T12:07:55.149046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgPreVoteResp from 8f4442f54c46fb8d at term 1"}
	{"level":"info","ts":"2024-06-10T12:07:55.149074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became candidate at term 2"}
	{"level":"info","ts":"2024-06-10T12:07:55.149189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgVoteResp from 8f4442f54c46fb8d at term 2"}
	{"level":"info","ts":"2024-06-10T12:07:55.14921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became leader at term 2"}
	{"level":"info","ts":"2024-06-10T12:07:55.149221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8f4442f54c46fb8d elected leader 8f4442f54c46fb8d at term 2"}
	{"level":"info","ts":"2024-06-10T12:07:55.156121Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8f4442f54c46fb8d","local-member-attributes":"{Name:multinode-813300 ClientURLs:[https://172.17.159.171:2379]}","request-path":"/0/members/8f4442f54c46fb8d/attributes","cluster-id":"ede117c4f607edf2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T12:07:55.159001Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.159829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T12:07:55.160871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T12:07:55.163364Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T12:07:55.165819Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T12:07:55.166021Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.166252Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.166441Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.168652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.159.171:2379"}
	{"level":"info","ts":"2024-06-10T12:07:55.184009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T12:07:57.986982Z","caller":"traceutil/trace.go:171","msg":"trace[314319298] transaction","detail":"{read_only:false; response_revision:57; number_of_response:1; }","duration":"175.967496ms","start":"2024-06-10T12:07:57.811Z","end":"2024-06-10T12:07:57.986968Z","steps":["trace[314319298] 'process raft request'  (duration: 175.915395ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:07:57.985692Z","caller":"traceutil/trace.go:171","msg":"trace[688595595] transaction","detail":"{read_only:false; response_revision:56; number_of_response:1; }","duration":"176.678005ms","start":"2024-06-10T12:07:57.808997Z","end":"2024-06-10T12:07:57.985675Z","steps":["trace[688595595] 'process raft request'  (duration: 167.851999ms)"],"step_count":1}
	2024/06/10 12:08:00 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-10T12:11:45.034472Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.434792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-813300-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-06-10T12:11:45.034652Z","caller":"traceutil/trace.go:171","msg":"trace[1392918931] range","detail":"{range_begin:/registry/minions/multinode-813300-m02; range_end:; response_count:1; response_revision:627; }","duration":"372.686393ms","start":"2024-06-10T12:11:44.66195Z","end":"2024-06-10T12:11:45.034637Z","steps":["trace[1392918931] 'range keys from in-memory index tree'  (duration: 372.300191ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:11:45.034806Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T12:11:44.661936Z","time spent":"372.859294ms","remote":"127.0.0.1:55574","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3173,"request content":"key:\"/registry/minions/multinode-813300-m02\" "}
	{"level":"warn","ts":"2024-06-10T12:11:45.03612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.337283ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18126302413705664155 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-813300\" mod_revision:611 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-813300\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-813300\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-10T12:11:45.038666Z","caller":"traceutil/trace.go:171","msg":"trace[807238633] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"254.838757ms","start":"2024-06-10T12:11:44.783815Z","end":"2024-06-10T12:11:45.038654Z","steps":["trace[807238633] 'process raft request'  (duration: 57.529761ms)","trace[807238633] 'compare'  (duration: 193.138277ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T12:13:49.072922Z","caller":"traceutil/trace.go:171","msg":"trace[78076722] transaction","detail":"{read_only:false; response_revision:782; number_of_response:1; }","duration":"148.070995ms","start":"2024-06-10T12:13:48.924834Z","end":"2024-06-10T12:13:49.072905Z","steps":["trace[78076722] 'process raft request'  (duration: 147.862294ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:17:39 up 11 min,  0 users,  load average: 0.10, 0.24, 0.16
	Linux multinode-813300 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c39d54960e7d] <==
	I0610 12:16:36.130888       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:16:46.145505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:16:46.145897       1 main.go:227] handling current node
	I0610 12:16:46.146067       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:16:46.146083       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:16:56.160466       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:16:56.160571       1 main.go:227] handling current node
	I0610 12:16:56.160586       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:16:56.160594       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:17:06.173930       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:17:06.173977       1 main.go:227] handling current node
	I0610 12:17:06.173992       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:17:06.173999       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:17:16.180797       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:17:16.180971       1 main.go:227] handling current node
	I0610 12:17:16.181005       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:17:16.181031       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:17:26.197081       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:17:26.197184       1 main.go:227] handling current node
	I0610 12:17:26.197201       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:17:26.197210       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:17:36.204586       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:17:36.204700       1 main.go:227] handling current node
	I0610 12:17:36.204716       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:17:36.204725       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ba52603f8387] <==
	I0610 12:07:59.824973       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0610 12:07:59.841370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.159.171]
	I0610 12:07:59.843233       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 12:07:59.851566       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 12:08:00.422415       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0610 12:08:00.612432       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0610 12:08:00.612551       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0610 12:08:00.612582       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.8µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0610 12:08:00.613710       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0610 12:08:00.614096       1 timeout.go:142] post-timeout activity - time-elapsed: 1.826019ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0610 12:08:00.723908       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 12:08:00.768391       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0610 12:08:00.811944       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 12:08:14.681862       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0610 12:08:15.551635       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0610 12:12:25.854015       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62544: use of closed network connection
	E0610 12:12:26.395729       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62547: use of closed network connection
	E0610 12:12:27.123198       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62549: use of closed network connection
	E0610 12:12:27.655576       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62551: use of closed network connection
	E0610 12:12:28.202693       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62554: use of closed network connection
	E0610 12:12:28.742674       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62556: use of closed network connection
	E0610 12:12:29.738951       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62559: use of closed network connection
	E0610 12:12:40.298395       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62561: use of closed network connection
	E0610 12:12:40.800091       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62563: use of closed network connection
	E0610 12:12:51.330500       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62566: use of closed network connection
	
	
	==> kube-controller-manager [f1409bf44ff1] <==
	I0610 12:08:16.024148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.478301ms"
	I0610 12:08:16.151441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.784808ms"
	I0610 12:08:16.151859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="288.402µs"
	I0610 12:08:16.577624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.03545ms"
	I0610 12:08:16.593339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.556101ms"
	I0610 12:08:16.593508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.3µs"
	I0610 12:08:30.535681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130µs"
	I0610 12:08:30.566310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.4µs"
	I0610 12:08:32.538906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="180.301µs"
	I0610 12:08:32.610537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.137489ms"
	I0610 12:08:32.611020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5µs"
	I0610 12:08:34.635560       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:11:28.859639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:11:28.879298       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m02" podCIDRs=["10.244.1.0/24"]
	I0610 12:11:29.670639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:11:51.574110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:12:19.785464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.490556ms"
	I0610 12:12:19.804051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.524284ms"
	I0610 12:12:19.806222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0610 12:12:19.813010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.401µs"
	I0610 12:12:19.818841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.9µs"
	I0610 12:12:22.803157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.023114ms"
	I0610 12:12:22.803959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.7µs"
	I0610 12:12:23.117968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.704624ms"
	I0610 12:12:23.118507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.5µs"
	
	
	==> kube-proxy [afad8b05897e] <==
	I0610 12:08:17.787330       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:08:17.815813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.159.171"]
	I0610 12:08:17.929231       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:08:17.929304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:08:17.929325       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:08:17.933115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:08:17.933534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:08:17.933681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:08:17.935227       1 config.go:192] "Starting service config controller"
	I0610 12:08:17.935260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:08:17.935291       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:08:17.935297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:08:17.937731       1 config.go:319] "Starting node config controller"
	I0610 12:08:17.938095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:08:18.035433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:08:18.035502       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:08:18.038590       1 shared_informer.go:320] Caches are synced for node config
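
The "Setting route_localnet=1" line above is kube-proxy's iptables proxier adjusting a sysctl so NodePort traffic addressed to 127.0.0.1 is routed instead of dropped as martian. A rough sketch of that one step (an illustration, not kube-proxy's actual code path):

package main

import (
	"fmt"
	"os"
)

func main() {
	// kube-proxy's iptables proxier sets this sysctl (see the log above) so that
	// NodePort traffic to localhost is routed rather than discarded.
	const path = "/proc/sys/net/ipv4/conf/all/route_localnet"
	if err := os.WriteFile(path, []byte("1\n"), 0o644); err != nil {
		fmt.Println("requires Linux and root:", err)
		return
	}
	fmt.Println("route_localnet=1")
}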
	
	
	==> kube-scheduler [bd1a6cd98743] <==
	W0610 12:07:58.426795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.427119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.503514       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 12:07:58.503568       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 12:07:58.610877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 12:07:58.611650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 12:07:58.611603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 12:07:58.612141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 12:07:58.614694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 12:07:58.614992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 12:07:58.752570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.752635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.810605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 12:07:58.810721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 12:07:58.815170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 12:07:58.815852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 12:07:58.816493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 12:07:58.816687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 12:07:58.834947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.836145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.838693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 12:07:58.838938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 12:07:58.897162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 12:07:58.897200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:08:01.565495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
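
The burst of "forbidden" reflector errors above is the usual scheduler startup race: it begins listing resources before the API server has finished wiring up the default RBAC bindings, and the final "Caches are synced" line shows it recovering. If one of these persisted, a hypothetical client-go check (not part of this test suite) could confirm whether the permission ever arrived:

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server whether the scheduler user may list csinodes,
	// mirroring one of the denied requests in the log above.
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Group:    "storage.k8s.io",
				Resource: "csinodes",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", res.Status.Allowed)
}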
	
	
	==> kubelet <==
	Jun 10 12:13:00 multinode-813300 kubelet[2134]: E0610 12:13:00.916013    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:13:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:13:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:13:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:13:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:14:00 multinode-813300 kubelet[2134]: E0610 12:14:00.921686    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:14:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:14:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:14:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:14:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:15:00 multinode-813300 kubelet[2134]: E0610 12:15:00.915435    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:15:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:15:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:15:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:15:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:16:00 multinode-813300 kubelet[2134]: E0610 12:16:00.916678    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:16:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:16:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:16:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:16:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:17:00 multinode-813300 kubelet[2134]: E0610 12:17:00.916733    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:17:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:17:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:17:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:17:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
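
The once-a-minute canary errors above come from the kubelet recreating a KUBE-KUBELET-CANARY chain to detect iptables flushes; on this VM the ip6tables variant fails because the guest kernel has no IPv6 nat table. A rough reproduction of the failing call (assumes a Linux host; this is not the kubelet's real code path):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Attempt the same chain creation the kubelet canary performs; on this VM it
	// fails with exit status 3 because the ip6tables nat table is unavailable.
	out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
	if err != nil {
		fmt.Printf("could not set up iptables canary: %v\n%s", err, out)
		return
	}
	fmt.Println("canary chain created in ip6tables nat table")
}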
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:17:30.582010    7892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-813300 -n multinode-813300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-813300 -n multinode-813300: (13.1025566s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-813300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (266.33s)

                                                
                                    
TestMultiNode/serial/CopyFile (76.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 status --output json --alsologtostderr
E0610 12:18:17.600217    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-813300 status --output json --alsologtostderr: exit status 2 (38.8046768s)

                                                
                                                
-- stdout --
	[{"Name":"multinode-813300","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-813300-m02","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-813300-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
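
The stdout above is machine-readable; a small sketch showing how it can be decoded (struct field names are inferred from that JSON, not taken from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus mirrors the fields visible in the JSON output above.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `[{"Name":"multinode-813300-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]`
	var nodes []nodeStatus
	if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes {
		// A worker whose kubelet is "Stopped" is what turned this run into exit status 2.
		fmt.Printf("%s: host=%s kubelet=%s\n", n.Name, n.Host, n.Kubelet)
	}
}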
** stderr ** 
	W0610 12:18:07.938318    6532 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0610 12:18:08.025708    6532 out.go:291] Setting OutFile to fd 840 ...
	I0610 12:18:08.026621    6532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:18:08.027625    6532 out.go:304] Setting ErrFile to fd 612...
	I0610 12:18:08.027625    6532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:18:08.042101    6532 out.go:298] Setting JSON to true
	I0610 12:18:08.042101    6532 mustload.go:65] Loading cluster: multinode-813300
	I0610 12:18:08.042101    6532 notify.go:220] Checking for updates...
	I0610 12:18:08.042974    6532 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:18:08.042974    6532 status.go:255] checking status of multinode-813300 ...
	I0610 12:18:08.043923    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:18:10.408148    6532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:18:10.408505    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:10.408765    6532 status.go:330] multinode-813300 host status = "Running" (err=<nil>)
	I0610 12:18:10.408765    6532 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:18:10.410637    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:18:12.814190    6532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:18:12.814190    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:12.814386    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:18:15.676969    6532 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:18:15.676969    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:15.676969    6532 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:18:15.691614    6532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 12:18:15.691614    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:18:17.978114    6532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:18:17.978114    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:17.978792    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:18:20.762057    6532 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:18:20.762057    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:20.763286    6532 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:18:20.869833    6532 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1777085s)
	I0610 12:18:20.883291    6532 ssh_runner.go:195] Run: systemctl --version
	I0610 12:18:20.910104    6532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:18:20.937774    6532 kubeconfig.go:125] found "multinode-813300" server: "https://172.17.159.171:8443"
	I0610 12:18:20.937774    6532 api_server.go:166] Checking apiserver status ...
	I0610 12:18:20.949349    6532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:18:20.987234    6532 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1957/cgroup
	W0610 12:18:21.005613    6532 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1957/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 12:18:21.018000    6532 ssh_runner.go:195] Run: ls
	I0610 12:18:21.025535    6532 api_server.go:253] Checking apiserver healthz at https://172.17.159.171:8443/healthz ...
	I0610 12:18:21.034449    6532 api_server.go:279] https://172.17.159.171:8443/healthz returned 200:
	ok
	I0610 12:18:21.034854    6532 status.go:422] multinode-813300 apiserver status = Running (err=<nil>)
	I0610 12:18:21.034915    6532 status.go:257] multinode-813300 status: &{Name:multinode-813300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
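
The "unable to find freezer cgroup" warning a few lines up is expected on cgroup v2 guests, where /proc/<pid>/cgroup carries no per-controller freezer line, so the grep exits non-zero and the check falls through to the healthz probe. A tiny illustration of that failure mode (an assumption about the cause, not minikube's code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// On cgroup v1 each line names its controllers (e.g. "7:freezer:/...");
	// on cgroup v2 there is a single "0::/..." line and this scan finds nothing.
	f, err := os.Open("/proc/self/cgroup")
	if err != nil {
		fmt.Println("requires Linux:", err)
		return
	}
	defer f.Close()
	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.Contains(sc.Text(), "freezer") {
			fmt.Println(sc.Text())
			found = true
		}
	}
	if !found {
		fmt.Println("no freezer line; a grep for it exits non-zero, as in the log above")
	}
}
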
	I0610 12:18:21.034915    6532 status.go:255] checking status of multinode-813300-m02 ...
	I0610 12:18:21.035684    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:18:23.352889    6532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:18:23.353515    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:23.353515    6532 status.go:330] multinode-813300-m02 host status = "Running" (err=<nil>)
	I0610 12:18:23.353515    6532 host.go:66] Checking if "multinode-813300-m02" exists ...
	I0610 12:18:23.354037    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:18:25.694572    6532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:18:25.694572    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:25.694967    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:18:28.500182    6532 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:18:28.500356    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:28.500356    6532 host.go:66] Checking if "multinode-813300-m02" exists ...
	I0610 12:18:28.513384    6532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 12:18:28.513966    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:18:30.869878    6532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:18:30.869878    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:30.870054    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:18:33.680025    6532 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:18:33.680221    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:33.680285    6532 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:18:33.783459    6532 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2700317s)
	I0610 12:18:33.796278    6532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:18:33.827475    6532 status.go:257] multinode-813300-m02 status: &{Name:multinode-813300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 12:18:33.827667    6532 status.go:255] checking status of multinode-813300-m03 ...
	I0610 12:18:33.828242    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:18:36.157238    6532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:18:36.157238    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:36.157238    6532 status.go:330] multinode-813300-m03 host status = "Running" (err=<nil>)
	I0610 12:18:36.157238    6532 host.go:66] Checking if "multinode-813300-m03" exists ...
	I0610 12:18:36.157916    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:18:38.476833    6532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:18:38.477404    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:38.477543    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:18:41.266647    6532 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:18:41.266647    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:41.266647    6532 host.go:66] Checking if "multinode-813300-m03" exists ...
	I0610 12:18:41.278634    6532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 12:18:41.278634    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:18:43.624873    6532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:18:43.624873    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:43.624975    6532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m03 ).networkadapters[0]).ipaddresses[0]
	I0610 12:18:46.461442    6532 main.go:141] libmachine: [stdout =====>] : 172.17.156.194
	
	I0610 12:18:46.461841    6532 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:18:46.461977    6532 sshutil.go:53] new ssh client: &{IP:172.17.156.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m03\id_rsa Username:docker}
	I0610 12:18:46.560149    6532 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2814724s)
	I0610 12:18:46.572974    6532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:18:46.598078    6532 status.go:257] multinode-813300-m03 status: &{Name:multinode-813300-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
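
Across the checks above the decisive signals are the healthz probe against https://172.17.159.171:8443/healthz returning 200 while the m03 kubelet is reported Stopped. A minimal probe in the same spirit (a hypothetical standalone check; it skips TLS verification only because the minikube CA is not in the host trust store):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Diagnostic probe only: the apiserver cert is signed by the
			// minikube CA, which the host does not trust.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://172.17.159.171:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}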
multinode_test.go:186: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-813300 status --output json --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-813300 -n multinode-813300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-813300 -n multinode-813300: (13.168041s)
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-813300 logs -n 25: (9.1490168s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p mount-start-1-314000                           | mount-start-1-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:00 UTC | 10 Jun 24 12:01 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-314000 ssh -- ls                    | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:01 UTC | 10 Jun 24 12:01 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-314000                           | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:01 UTC | 10 Jun 24 12:01 UTC |
	| start   | -p mount-start-2-314000                           | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:01 UTC | 10 Jun 24 12:03 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:03 UTC |                     |
	|         | --profile mount-start-2-314000 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-314000 ssh -- ls                    | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:03 UTC | 10 Jun 24 12:04 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-314000                           | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:04 UTC |
	| delete  | -p mount-start-1-314000                           | mount-start-1-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:04 UTC |
	| start   | -p multinode-813300                               | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:11 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- apply -f                   | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- rollout                    | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC |                     |
	|         | busybox-fc5497c4f-czxmt -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC |                     |
	|         | busybox-fc5497c4f-z28tq -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1                         |                      |                   |         |                     |                     |
	| node    | add -p multinode-813300 -v 3                      | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:13 UTC |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 12:04:43
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 12:04:43.867977    4588 out.go:291] Setting OutFile to fd 712 ...
	I0610 12:04:43.868768    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:04:43.868768    4588 out.go:304] Setting ErrFile to fd 776...
	I0610 12:04:43.868768    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:04:43.892667    4588 out.go:298] Setting JSON to false
	I0610 12:04:43.895275    4588 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20972,"bootTime":1718000111,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 12:04:43.895275    4588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 12:04:43.900472    4588 out.go:177] * [multinode-813300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 12:04:43.904368    4588 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:04:43.904368    4588 notify.go:220] Checking for updates...
	I0610 12:04:43.909526    4588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 12:04:43.912565    4588 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 12:04:43.917533    4588 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 12:04:43.919941    4588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 12:04:43.923788    4588 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:04:43.924271    4588 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 12:04:49.675599    4588 out.go:177] * Using the hyperv driver based on user configuration
	I0610 12:04:49.679131    4588 start.go:297] selected driver: hyperv
	I0610 12:04:49.679287    4588 start.go:901] validating driver "hyperv" against <nil>
	I0610 12:04:49.679287    4588 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 12:04:49.728962    4588 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 12:04:49.730655    4588 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:04:49.730655    4588 cni.go:84] Creating CNI manager for ""
	I0610 12:04:49.730655    4588 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 12:04:49.730655    4588 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 12:04:49.730655    4588 start.go:340] cluster config:
	{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:04:49.730655    4588 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:04:49.735782    4588 out.go:177] * Starting "multinode-813300" primary control-plane node in "multinode-813300" cluster
	I0610 12:04:49.737542    4588 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:04:49.738389    4588 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 12:04:49.738389    4588 cache.go:56] Caching tarball of preloaded images
	I0610 12:04:49.738521    4588 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:04:49.738973    4588 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 12:04:49.739157    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:04:49.739400    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json: {Name:mke1756b0f63dd0c0eff0216ad43e7c3fc903678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:04:49.740675    4588 start.go:360] acquireMachinesLock for multinode-813300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:04:49.740675    4588 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300"
	I0610 12:04:49.740675    4588 start.go:93] Provisioning new machine with config: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 12:04:49.740675    4588 start.go:125] createHost starting for "" (driver="hyperv")
	I0610 12:04:49.742990    4588 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 12:04:49.744068    4588 start.go:159] libmachine.API.Create for "multinode-813300" (driver="hyperv")
	I0610 12:04:49.744068    4588 client.go:168] LocalClient.Create starting
	I0610 12:04:49.744355    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 12:04:49.744355    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:04:49.745001    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:04:49.745251    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 12:04:49.745288    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:04:49.745537    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:04:49.745648    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 12:04:51.938878    4588 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 12:04:51.938878    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:04:51.939553    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 12:04:53.807457    4588 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 12:04:53.807457    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:04:53.808222    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:04:55.393412    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:04:55.393412    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:04:55.393412    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:04:59.273212    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:04:59.274143    4588 main.go:141] libmachine: [stderr =====>] : 
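
The "[executing ==>] / [stdout =====>] / [stderr =====>]" triplets above are the Hyper-V driver shelling out to powershell.exe non-interactively and logging both streams. A loose sketch of that pattern (an illustration, not minikube's actual libmachine code):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Same shape as the libmachine calls above: run PowerShell with
	// -NoProfile -NonInteractive and report stdout/stderr separately.
	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	cmd := exec.Command(ps, "-NoProfile", "-NonInteractive", `( Hyper-V\Get-VM multinode-813300 ).state`)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	fmt.Printf("[stdout =====>] : %s\n", stdout.String())
	fmt.Printf("[stderr =====>] : %s\n", stderr.String())
	if err != nil {
		fmt.Println("powershell failed:", err)
	}
}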
	I0610 12:04:59.276499    4588 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 12:04:59.786597    4588 main.go:141] libmachine: Creating SSH key...
	I0610 12:05:00.178242    4588 main.go:141] libmachine: Creating VM...
	I0610 12:05:00.178340    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:05:03.335727    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:05:03.335727    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:03.336442    4588 main.go:141] libmachine: Using switch "Default Switch"
	I0610 12:05:03.336442    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:05:05.206486    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:05:05.206839    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:05.206839    4588 main.go:141] libmachine: Creating VHD
	I0610 12:05:05.206938    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 12:05:09.220962    4588 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D79874B4-719D-480C-BEAA-32F87CD7D741
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 12:05:09.221783    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:09.221783    4588 main.go:141] libmachine: Writing magic tar header
	I0610 12:05:09.221873    4588 main.go:141] libmachine: Writing SSH key tar header
	I0610 12:05:09.231477    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 12:05:12.585103    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:12.585103    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:12.586033    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\disk.vhd' -SizeBytes 20000MB
	I0610 12:05:15.285675    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:15.285675    4588 main.go:141] libmachine: [stderr =====>] : 
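The three VHD steps above are the driver's disk-seeding trick: create a small *fixed* VHD (whose payload bytes sit at a predictable offset), write a tar stream carrying the SSH key into it (the "magic tar header" lines), then convert it to a dynamic VHD and grow it to the requested 20000MB so the guest can unpack the tar on first boot. A hedged PowerShell sketch of just the Hyper-V side, with the tar writing (done in minikube's Go code) shown only as a comment:

	# Sketch of the VHD create/convert/resize steps logged above.
	# $machineDir matches the path in the log.
	$machineDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300'
	Hyper-V\New-VHD -Path "$machineDir\fixed.vhd" -SizeBytes 10MB -Fixed
	# (minikube then writes a tar archive with the SSH key directly into
	#  the raw fixed.vhd; a fixed VHD keeps the payload at a known offset)
	Hyper-V\Convert-VHD -Path "$machineDir\fixed.vhd" `
	    -DestinationPath "$machineDir\disk.vhd" -VHDType Dynamic -DeleteSource
	Hyper-V\Resize-VHD -Path "$machineDir\disk.vhd" -SizeBytes 20000MB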
	I0610 12:05:15.285962    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-813300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 12:05:19.111640    4588 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-813300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 12:05:19.111640    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:19.112222    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-813300 -DynamicMemoryEnabled $false
	I0610 12:05:21.531378    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:21.531378    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:21.531378    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-813300 -Count 2
	I0610 12:05:23.889725    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:23.889725    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:23.890596    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-813300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\boot2docker.iso'
	I0610 12:05:26.621094    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:26.621720    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:26.621781    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-813300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\disk.vhd'
	I0610 12:05:29.472370    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:29.472370    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:29.472370    4588 main.go:141] libmachine: Starting VM...
	I0610 12:05:29.473255    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300
	I0610 12:05:32.754805    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:32.754805    4588 main.go:141] libmachine: [stderr =====>] : 
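Taken together, the calls above assemble and boot the machine: a fixed 2200MB of RAM (dynamic memory disabled), 2 vCPUs, the boot2docker ISO in the DVD drive, and the freshly resized disk attached. As one hedged PowerShell sketch, reusing the switch and paths from earlier in the log:

	# Sketch of the VM assembly logged above.
	$name = 'multinode-813300'
	$machineDir = "C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\$name"
	Hyper-V\New-VM $name -Path $machineDir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false   # fixed-size RAM
	Hyper-V\Set-VMProcessor $name -Count 2
	Hyper-V\Set-VMDvdDrive -VMName $name -Path "$machineDir\boot2docker.iso"
	Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$machineDir\disk.vhd"
	Hyper-V\Start-VM $name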
	I0610 12:05:32.754805    4588 main.go:141] libmachine: Waiting for host to start...
	I0610 12:05:32.754805    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:35.217643    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:35.218086    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:35.218212    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:37.944028    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:37.944028    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:38.950550    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:41.379344    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:41.379344    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:41.380252    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:44.115382    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:44.115382    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:45.121347    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:47.512650    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:47.512650    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:47.513336    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:50.281297    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:50.281297    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:51.289490    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:53.673938    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:53.674570    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:53.674570    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:05:56.397148    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:05:56.398100    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:57.399811    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:05:59.797095    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:05:59.797152    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:05:59.797152    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:02.530578    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:02.530578    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:02.530897    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:04.770192    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:04.770234    4588 main.go:141] libmachine: [stderr =====>] : 
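The half-minute of repeated state/IP queries above is a plain readiness poll: the VM reports Running almost immediately, but the first NIC only publishes an IPv4 address (172.17.159.171 here) once the guest network stack is up, so the driver keeps retrying until the address field is non-empty. A minimal sketch of the same loop, with an assumed timeout that is not taken from the log:

	# Sketch of the wait-for-IP poll logged above; the 5-minute deadline
	# and 1-second sleep are assumptions.
	$name = 'multinode-813300'
	$deadline = (Get-Date).AddMinutes(5)
	$ip = $null
	while (-not $ip -and ((Get-Date) -lt $deadline)) {
	    $state = ( Hyper-V\Get-VM $name ).State
	    $ip = (( Hyper-V\Get-VM $name ).NetworkAdapters[0]).IPAddresses |
	          Where-Object { $_ -notmatch ':' } |   # skip IPv6 entries
	          Select-Object -First 1
	    if (-not $ip) { Start-Sleep -Seconds 1 }
	}
	Write-Output "$name is $state at $ip"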
	I0610 12:06:04.770296    4588 machine.go:94] provisionDockerMachine start ...
	I0610 12:06:04.770296    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:07.058629    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:07.058629    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:07.059046    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:09.847341    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:09.848100    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:09.853806    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:09.864878    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:09.864878    4588 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:06:09.992682    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:06:09.992682    4588 buildroot.go:166] provisioning hostname "multinode-813300"
	I0610 12:06:09.992830    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:12.311800    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:12.311800    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:12.312418    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:15.048157    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:15.048157    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:15.055378    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:15.055541    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:15.055541    4588 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300 && echo "multinode-813300" | sudo tee /etc/hostname
	I0610 12:06:15.227442    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300
	
	I0610 12:06:15.227442    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:17.470385    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:17.470385    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:17.470748    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:20.178259    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:20.178259    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:20.185354    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:20.185738    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:20.185872    4588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:06:20.340364    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:06:20.340364    4588 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:06:20.340507    4588 buildroot.go:174] setting up certificates
	I0610 12:06:20.340593    4588 provision.go:84] configureAuth start
	I0610 12:06:20.340593    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:22.647449    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:22.647770    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:22.647870    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:25.365433    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:25.366134    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:25.366227    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:27.676201    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:27.677237    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:27.677302    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:30.462238    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:30.462450    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:30.462450    4588 provision.go:143] copyHostCerts
	I0610 12:06:30.462450    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 12:06:30.463207    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:06:30.463207    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:06:30.463939    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:06:30.464777    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 12:06:30.465582    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:06:30.465582    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:06:30.465582    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:06:30.466886    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 12:06:30.466886    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:06:30.466886    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:06:30.467429    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:06:30.467908    4588 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300 san=[127.0.0.1 172.17.159.171 localhost minikube multinode-813300]
	I0610 12:06:30.880090    4588 provision.go:177] copyRemoteCerts
	I0610 12:06:30.893142    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:06:30.893241    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:33.157947    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:33.158648    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:33.158648    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:35.872452    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:35.873367    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:35.873367    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:06:35.983936    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0907517s)
	I0610 12:06:35.984059    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 12:06:35.984539    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:06:36.037427    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 12:06:36.037713    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 12:06:36.087322    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 12:06:36.087855    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 12:06:36.138563    4588 provision.go:87] duration metric: took 15.7977809s to configureAuth
	I0610 12:06:36.138653    4588 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:06:36.138819    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:06:36.138819    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:38.411440    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:38.411440    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:38.411440    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:41.131406    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:41.131406    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:41.138066    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:41.138428    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:41.138428    4588 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:06:41.270867    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:06:41.270942    4588 buildroot.go:70] root file system type: tmpfs
	I0610 12:06:41.271213    4588 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:06:41.271282    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:43.585535    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:43.585535    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:43.585535    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:46.334256    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:46.334341    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:46.340258    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:46.340937    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:46.340937    4588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:06:46.504832    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 12:06:46.505009    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:48.805219    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:48.806280    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:48.806423    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:51.509193    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:51.509586    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:51.514228    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:06:51.514228    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:06:51.514228    4588 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:06:53.697279    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:06:53.697853    4588 machine.go:97] duration metric: took 48.9265831s to provisionDockerMachine
	I0610 12:06:53.697853    4588 client.go:171] duration metric: took 2m3.9527697s to LocalClient.Create
	I0610 12:06:53.698031    4588 start.go:167] duration metric: took 2m3.9529368s to libmachine.API.Create "multinode-813300"
	I0610 12:06:53.698085    4588 start.go:293] postStartSetup for "multinode-813300" (driver="hyperv")
	I0610 12:06:53.698115    4588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:06:53.710436    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:06:53.710436    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:06:55.966771    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:06:55.966771    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:55.966771    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:06:58.718421    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:06:58.718421    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:06:58.719167    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:06:58.827171    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1166938s)
	I0610 12:06:58.839755    4588 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:06:58.846848    4588 command_runner.go:130] > NAME=Buildroot
	I0610 12:06:58.846848    4588 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 12:06:58.846848    4588 command_runner.go:130] > ID=buildroot
	I0610 12:06:58.846848    4588 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 12:06:58.846848    4588 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 12:06:58.847038    4588 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:06:58.847038    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:06:58.847652    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:06:58.848877    4588 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:06:58.848877    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 12:06:58.861906    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:06:58.883111    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:06:58.930581    4588 start.go:296] duration metric: took 5.2324233s for postStartSetup
	I0610 12:06:58.932577    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:01.213042    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:01.214102    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:01.214102    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:03.953887    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:03.954621    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:03.954896    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:07:03.957997    4588 start.go:128] duration metric: took 2m14.216153s to createHost
	I0610 12:07:03.957997    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:06.232653    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:06.232653    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:06.232653    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:08.922879    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:08.922879    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:08.928691    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:07:08.928691    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:07:08.928691    4588 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 12:07:09.066125    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718021229.075627913
	
	I0610 12:07:09.066125    4588 fix.go:216] guest clock: 1718021229.075627913
	I0610 12:07:09.066125    4588 fix.go:229] Guest: 2024-06-10 12:07:09.075627913 +0000 UTC Remote: 2024-06-10 12:07:03.9579973 +0000 UTC m=+140.257965001 (delta=5.117630613s)
	I0610 12:07:09.066240    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:11.379014    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:11.379014    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:11.379357    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:14.163833    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:14.163833    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:14.170036    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:07:14.170200    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.159.171 22 <nil> <nil>}
	I0610 12:07:14.170200    4588 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718021229
	I0610 12:07:14.308564    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:07:09 UTC 2024
	
	I0610 12:07:14.308564    4588 fix.go:236] clock set: Mon Jun 10 12:07:09 UTC 2024
	 (err=<nil>)
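The clock fix-up above is worth spelling out: the guest printed 1718021229.075627913 for `date +%s.%N` while the host-side reference was 12:07:03.9579973 UTC, a drift of about 5.1176s, after which the guest clock was re-stamped to the whole second via `sudo date -s @1718021229`. A hedged PowerShell sketch of the host-side comparison (the 2-second threshold is an assumption, not taken from the log):

	# Sketch of the drift check logged above. $guestEpoch is the value
	# the guest printed for 'date +%s.%N'.
	$guestEpoch = [double]'1718021229.075627913'
	$hostRef = [DateTimeOffset]'2024-06-10T12:07:03.9579973Z'
	$delta = $guestEpoch - $hostRef.ToUnixTimeMilliseconds() / 1000.0
	if ([math]::Abs($delta) -gt 2) {
	    # the fix is run over SSH inside the guest, as in the log:
	    $fix = "sudo date -s @$([int64][math]::Floor($guestEpoch))"
	    Write-Output ("clock drift {0:N3}s -> {1}" -f $delta, $fix)
	}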
	I0610 12:07:14.308564    4588 start.go:83] releasing machines lock for "multinode-813300", held for 2m24.5667064s
	I0610 12:07:14.308728    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:16.583361    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:16.583361    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:16.583361    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:19.333520    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:19.334493    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:19.338942    4588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:07:19.339115    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:19.349878    4588 ssh_runner.go:195] Run: cat /version.json
	I0610 12:07:19.349878    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:07:21.705493    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:21.705493    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:21.705493    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:21.736050    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:07:21.736147    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:21.736191    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:07:24.564607    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:24.564844    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:24.564844    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:07:24.595261    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:07:24.595261    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:07:24.596193    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:07:24.730348    4588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 12:07:24.730492    4588 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3915062s)
	I0610 12:07:24.730492    4588 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 12:07:24.730492    4588 ssh_runner.go:235] Completed: cat /version.json: (5.3805704s)
	I0610 12:07:24.743901    4588 ssh_runner.go:195] Run: systemctl --version
	I0610 12:07:24.755276    4588 command_runner.go:130] > systemd 252 (252)
	I0610 12:07:24.755521    4588 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 12:07:24.768011    4588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 12:07:24.776306    4588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 12:07:24.777113    4588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:07:24.788496    4588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:07:24.821922    4588 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 12:07:24.822097    4588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 12:07:24.822097    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:07:24.822097    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:07:24.858836    4588 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 12:07:24.870754    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:07:24.906067    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:07:24.927089    4588 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:07:24.939539    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:07:24.975868    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:07:25.012044    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:07:25.051040    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:07:25.093321    4588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:07:25.128698    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:07:25.161844    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:07:25.194094    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 12:07:25.228546    4588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:07:25.253020    4588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 12:07:25.266396    4588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:07:25.300773    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:25.529366    4588 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 12:07:25.568641    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:07:25.581890    4588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:07:25.609889    4588 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 12:07:25.610189    4588 command_runner.go:130] > [Unit]
	I0610 12:07:25.610189    4588 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 12:07:25.610189    4588 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 12:07:25.610189    4588 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 12:07:25.610264    4588 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 12:07:25.610264    4588 command_runner.go:130] > StartLimitBurst=3
	I0610 12:07:25.610264    4588 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 12:07:25.610264    4588 command_runner.go:130] > [Service]
	I0610 12:07:25.610323    4588 command_runner.go:130] > Type=notify
	I0610 12:07:25.610323    4588 command_runner.go:130] > Restart=on-failure
	I0610 12:07:25.610323    4588 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 12:07:25.610381    4588 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 12:07:25.610381    4588 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 12:07:25.610381    4588 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 12:07:25.610460    4588 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 12:07:25.610460    4588 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 12:07:25.610460    4588 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 12:07:25.610541    4588 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 12:07:25.610541    4588 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 12:07:25.610541    4588 command_runner.go:130] > ExecStart=
	I0610 12:07:25.610541    4588 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 12:07:25.610727    4588 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 12:07:25.610787    4588 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 12:07:25.610787    4588 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 12:07:25.610787    4588 command_runner.go:130] > LimitNOFILE=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > LimitNPROC=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > LimitCORE=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 12:07:25.610845    4588 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 12:07:25.610845    4588 command_runner.go:130] > TasksMax=infinity
	I0610 12:07:25.610845    4588 command_runner.go:130] > TimeoutStartSec=0
	I0610 12:07:25.610922    4588 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 12:07:25.610922    4588 command_runner.go:130] > Delegate=yes
	I0610 12:07:25.610922    4588 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 12:07:25.610922    4588 command_runner.go:130] > KillMode=process
	I0610 12:07:25.610978    4588 command_runner.go:130] > [Install]
	I0610 12:07:25.610978    4588 command_runner.go:130] > WantedBy=multi-user.target
	I0610 12:07:25.624039    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:07:25.661400    4588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:07:25.720292    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:07:25.757987    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:07:25.796201    4588 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:07:25.863195    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:07:25.889245    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:07:25.926689    4588 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 12:07:25.939863    4588 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:07:25.945195    4588 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 12:07:25.958144    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:07:25.974980    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:07:26.023598    4588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:07:26.238985    4588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:07:26.451509    4588 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:07:26.451626    4588 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 12:07:26.501126    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:26.701662    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:07:29.249741    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5480592s)
	I0610 12:07:29.262915    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 12:07:29.301406    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:07:29.341268    4588 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 12:07:29.568906    4588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 12:07:29.785481    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:29.992495    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 12:07:30.037215    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:07:30.085524    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:30.300979    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 12:07:30.418219    4588 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 12:07:30.432434    4588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 12:07:30.441630    4588 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 12:07:30.441768    4588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 12:07:30.441768    4588 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0610 12:07:30.441768    4588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 12:07:30.441768    4588 command_runner.go:130] > Access: 2024-06-10 12:07:30.340771420 +0000
	I0610 12:07:30.441768    4588 command_runner.go:130] > Modify: 2024-06-10 12:07:30.340771420 +0000
	I0610 12:07:30.441768    4588 command_runner.go:130] > Change: 2024-06-10 12:07:30.344771436 +0000
	I0610 12:07:30.441768    4588 command_runner.go:130] >  Birth: -
	I0610 12:07:30.441768    4588 start.go:562] Will wait 60s for crictl version
	I0610 12:07:30.453463    4588 ssh_runner.go:195] Run: which crictl
	I0610 12:07:30.460096    4588 command_runner.go:130] > /usr/bin/crictl
	I0610 12:07:30.473201    4588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:07:30.530265    4588 command_runner.go:130] > Version:  0.1.0
	I0610 12:07:30.530298    4588 command_runner.go:130] > RuntimeName:  docker
	I0610 12:07:30.530298    4588 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 12:07:30.530298    4588 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 12:07:30.530453    4588 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 12:07:30.541045    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:07:30.577679    4588 command_runner.go:130] > 26.1.4
	I0610 12:07:30.586938    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:07:30.617216    4588 command_runner.go:130] > 26.1.4
	I0610 12:07:30.622417    4588 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 12:07:30.622417    4588 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 12:07:30.626308    4588 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 12:07:30.629450    4588 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 12:07:30.629450    4588 ip.go:210] interface addr: 172.17.144.1/20
	I0610 12:07:30.643235    4588 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 12:07:30.649840    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:07:30.670389    4588 kubeadm.go:877] updating cluster {Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 12:07:30.670389    4588 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:07:30.679574    4588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 12:07:30.702356    4588 docker.go:685] Got preloaded images: 
	I0610 12:07:30.702356    4588 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0610 12:07:30.713877    4588 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 12:07:30.734201    4588 command_runner.go:139] > {"Repositories":{}}
	I0610 12:07:30.745928    4588 ssh_runner.go:195] Run: which lz4
	I0610 12:07:30.752458    4588 command_runner.go:130] > /usr/bin/lz4
	I0610 12:07:30.752458    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0610 12:07:30.763475    4588 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 12:07:30.769540    4588 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 12:07:30.770227    4588 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 12:07:30.770389    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0610 12:07:32.729738    4588 docker.go:649] duration metric: took 1.9762697s to copy over tarball
	I0610 12:07:32.743906    4588 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 12:07:41.714684    4588 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9705398s)
	I0610 12:07:41.714777    4588 ssh_runner.go:146] rm: /preloaded.tar.lz4
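
The five steps above form an idempotent transfer: stat the remote path, scp the preload tarball only when the stat fails, extract it with lz4, then delete it. A minimal local sketch of the check-then-copy half, with illustrative paths (minikube runs this remotely via ssh_runner, not with local file I/O):

package main

import (
	"io"
	"log"
	"os"
)

// ensureFile copies src to dst only when dst does not exist yet,
// mirroring the stat-then-scp sequence in the log above.
func ensureFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present: skip the ~359 MB transfer
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := ensureFile("preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4", "/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}
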
	I0610 12:07:41.787089    4588 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 12:07:41.807203    4588 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0610 12:07:41.807257    4588 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0610 12:07:41.859157    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:42.090821    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:07:44.907266    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8158182s)
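
The repositories.json dumped above is a two-level map, repository name to tag-or-digest to image ID; it is regenerated from the preloaded images and copied back before docker is restarted so the daemon picks the entries up. A sketch of that shape, with one abbreviated entry taken from the log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// repositories mirrors the layout of /var/lib/docker/image/overlay2/repositories.json:
// repository -> (tag or digest reference) -> image ID.
type repositories struct {
	Repositories map[string]map[string]string `json:"Repositories"`
}

func main() {
	r := repositories{Repositories: map[string]map[string]string{
		"registry.k8s.io/pause": {
			"registry.k8s.io/pause:3.9": "sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
		},
	}}
	b, err := json.Marshal(r) // the real file carries all eight preloaded images
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b))
}
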
	I0610 12:07:44.919479    4588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 12:07:44.944175    4588 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 12:07:44.944175    4588 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:07:44.946511    4588 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 12:07:44.946557    4588 cache_images.go:84] Images are preloaded, skipping loading
	I0610 12:07:44.946658    4588 kubeadm.go:928] updating node { 172.17.159.171 8443 v1.30.1 docker true true} ...
	I0610 12:07:44.946933    4588 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.159.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
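
The [Service] drop-in above first clears ExecStart and then re-points kubelet at the per-version binary with node-specific flags. A hypothetical text/template rendering of that drop-in — the struct fields here are assumptions for illustration, not minikube's actual types:

package main

import (
	"log"
	"os"
	"text/template"
)

// dropIn reproduces the systemd override shown in the log; the empty
// ExecStart= line resets any ExecStart inherited from the base unit.
const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	err := t.Execute(os.Stdout, struct{ KubernetesVersion, NodeName, NodeIP string }{
		"v1.30.1", "multinode-813300", "172.17.159.171",
	})
	if err != nil {
		log.Fatal(err)
	}
}
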
	I0610 12:07:44.956339    4588 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 12:07:44.991381    4588 command_runner.go:130] > cgroupfs
	I0610 12:07:44.992435    4588 cni.go:84] Creating CNI manager for ""
	I0610 12:07:44.992435    4588 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 12:07:44.992435    4588 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 12:07:44.992562    4588 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.159.171 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-813300 NodeName:multinode-813300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.159.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.159.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 12:07:44.992992    4588 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.159.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-813300"
	  kubeletExtraArgs:
	    node-ip: 172.17.159.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.159.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 12:07:45.005272    4588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 12:07:45.024093    4588 command_runner.go:130] > kubeadm
	I0610 12:07:45.024093    4588 command_runner.go:130] > kubectl
	I0610 12:07:45.024093    4588 command_runner.go:130] > kubelet
	I0610 12:07:45.024093    4588 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 12:07:45.037363    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 12:07:45.055298    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0610 12:07:45.086932    4588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 12:07:45.118552    4588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0610 12:07:45.162013    4588 ssh_runner.go:195] Run: grep 172.17.159.171	control-plane.minikube.internal$ /etc/hosts
	I0610 12:07:45.168121    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
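
The bash one-liner above rewrites /etc/hosts without partial states: grep -v drops any stale control-plane.minikube.internal line, the current entry is appended, and the result lands in a temp file that sudo cp copies back whole. The same filter-and-append logic as a local Go sketch (the real command runs remotely, so file handling is elided here):

package main

import (
	"fmt"
	"strings"
)

// rewriteHosts drops any line ending in "<tab>host" and appends "ip<tab>host",
// matching the grep -v / echo pipeline in the log.
func rewriteHosts(contents, ip, host string) string {
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for the control-plane alias
		}
		keep = append(keep, line)
	}
	keep = append(keep, ip+"\t"+host)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal\n"
	fmt.Print(rewriteHosts(hosts, "172.17.159.171", "control-plane.minikube.internal"))
}
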
	I0610 12:07:45.202562    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:07:45.425101    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:07:45.455626    4588 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300 for IP: 172.17.159.171
	I0610 12:07:45.455626    4588 certs.go:194] generating shared ca certs ...
	I0610 12:07:45.455747    4588 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.456562    4588 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 12:07:45.456877    4588 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 12:07:45.457049    4588 certs.go:256] generating profile certs ...
	I0610 12:07:45.457786    4588 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key
	I0610 12:07:45.457868    4588 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.crt with IP's: []
	I0610 12:07:45.708342    4588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.crt ...
	I0610 12:07:45.708342    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.crt: {Name:mk54c1a1cec89ed140bb491b38817a3186ba7310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.709853    4588 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key ...
	I0610 12:07:45.709853    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key: {Name:mkf00743da8bbcad3b010f0cbb5cd0436ce14710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.710226    4588 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887
	I0610 12:07:45.710226    4588 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.159.171]
	I0610 12:07:45.907956    4588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887 ...
	I0610 12:07:45.907956    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887: {Name:mka8c1bb2a2baa00cc0af3681bd930d57ff75330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.909711    4588 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887 ...
	I0610 12:07:45.909711    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887: {Name:mkb18584b7bb3bb732e73307ae39bca648c3c22a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:45.910791    4588 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.e97d4887 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt
	I0610 12:07:45.926670    4588 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.e97d4887 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key
	I0610 12:07:45.927884    4588 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key
	I0610 12:07:45.928002    4588 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt with IP's: []
	I0610 12:07:46.173843    4588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt ...
	I0610 12:07:46.173843    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt: {Name:mkb418cf9d8991e80905755cce3c6f6de1ae9ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:07:46.174831    4588 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key ...
	I0610 12:07:46.174831    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key: {Name:mk51867a74a39076c910c5b47bfa2ded184ede24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
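
The three profile certs generated above (client, apiserver, aggregator proxy-client) are all signed by the shared minikubeCA; the apiserver cert additionally carries the four IP SANs logged at 12:07:45.710. A condensed crypto/x509 sketch of signing a serving cert with those SANs — the CA is generated in-memory here for brevity, whereas minikube reuses its existing ca.crt/ca.key, and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; minikube loads this from disk instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.17.159.171"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
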
	I0610 12:07:46.175803    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 12:07:46.175803    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 12:07:46.176809    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 12:07:46.186849    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 12:07:46.187823    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 12:07:46.187823    4588 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 12:07:46.187823    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 12:07:46.187823    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 12:07:46.188815    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 12:07:46.188815    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 12:07:46.188815    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 12:07:46.188815    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 12:07:46.188815    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.189810    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.192830    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 12:07:46.241117    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 12:07:46.288030    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 12:07:46.335188    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 12:07:46.376270    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 12:07:46.423248    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 12:07:46.475484    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 12:07:46.527362    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 12:07:46.576727    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 12:07:46.624358    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 12:07:46.675098    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 12:07:46.722137    4588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 12:07:46.780283    4588 ssh_runner.go:195] Run: openssl version
	I0610 12:07:46.789810    4588 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 12:07:46.800778    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 12:07:46.837222    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.844961    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.845084    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.859483    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:07:46.867918    4588 command_runner.go:130] > b5213941
	I0610 12:07:46.882717    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 12:07:46.919428    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 12:07:46.952808    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.958882    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.958882    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.971190    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 12:07:46.980429    4588 command_runner.go:130] > 51391683
	I0610 12:07:46.998007    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 12:07:47.035525    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 12:07:47.070284    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.077578    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.078136    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.091592    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 12:07:47.100124    4588 command_runner.go:130] > 3ec20f2e
	I0610 12:07:47.115904    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
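
The three repeats above follow OpenSSL's c_rehash convention: "openssl x509 -hash -noout" prints the certificate's subject-name hash (b5213941, 51391683, 3ec20f2e here), and a <hash>.0 symlink in /etc/ssl/certs lets OpenSSL locate the trust anchor by hash at verification time. A sketch of one such step, shelling out to openssl just as the log does; paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash installs a c_rehash-style "<subject-hash>.0" symlink for pemPath.
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
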
	I0610 12:07:47.147726    4588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:07:47.154748    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:07:47.154748    4588 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:07:47.156073    4588 kubeadm.go:391] StartCluster: {Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:07:47.164675    4588 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 12:07:47.200694    4588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 12:07:47.220824    4588 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0610 12:07:47.220824    4588 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0610 12:07:47.220824    4588 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0610 12:07:47.236087    4588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 12:07:47.265597    4588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 12:07:47.285573    4588 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:07:47.286023    4588 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:07:47.286107    4588 kubeadm.go:156] found existing configuration files:
	
	I0610 12:07:47.298886    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 12:07:47.316688    4588 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:07:47.317271    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:07:47.332217    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 12:07:47.363611    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 12:07:47.381321    4588 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:07:47.381903    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:07:47.393546    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 12:07:47.423937    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 12:07:47.440026    4588 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:07:47.440026    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:07:47.459787    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 12:07:47.496088    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 12:07:47.517579    4588 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:07:47.517579    4588 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:07:47.528796    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
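
The four blocks above repeat one check per kubeconfig: keep the file only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. Since none of the files exist on a first start, all four greps fail and all four rm -f calls are no-ops. A local sketch of that loop:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			fmt.Println("removing stale config:", f)
			os.Remove(f) // matches: sudo rm -f <conf>
		}
	}
}
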
	I0610 12:07:47.546992    4588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 12:07:47.980483    4588 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:07:47.980577    4588 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:08:01.301108    4588 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 12:08:01.301202    4588 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0610 12:08:01.301289    4588 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 12:08:01.301289    4588 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 12:08:01.301289    4588 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:08:01.301289    4588 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:08:01.301289    4588 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:08:01.301289    4588 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:08:01.302226    4588 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0610 12:08:01.302295    4588 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0610 12:08:01.302295    4588 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:08:01.302295    4588 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:08:01.305130    4588 out.go:204]   - Generating certificates and keys ...
	I0610 12:08:01.305388    4588 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 12:08:01.305388    4588 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 12:08:01.305588    4588 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 12:08:01.305588    4588 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 12:08:01.305751    4588 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 12:08:01.305751    4588 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 12:08:01.306003    4588 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0610 12:08:01.306003    4588 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 12:08:01.306299    4588 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 12:08:01.306299    4588 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.306482    4588 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 12:08:01.306482    4588 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0610 12:08:01.307259    4588 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.307345    4588 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-813300] and IPs [172.17.159.171 127.0.0.1 ::1]
	I0610 12:08:01.307672    4588 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 12:08:01.307672    4588 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 12:08:01.307672    4588 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0610 12:08:01.307672    4588 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:08:01.307672    4588 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:08:01.308340    4588 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:08:01.308340    4588 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:08:01.308946    4588 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:08:01.308946    4588 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:08:01.308946    4588 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:08:01.308946    4588 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:08:01.308946    4588 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:08:01.309472    4588 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:08:01.312844    4588 out.go:204]   - Booting up control plane ...
	I0610 12:08:01.312844    4588 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:08:01.313599    4588 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:08:01.313744    4588 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:08:01.313744    4588 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:08:01.313744    4588 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:08:01.313744    4588 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:08:01.314297    4588 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:08:01.314351    4588 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:08:01.314536    4588 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:08:01.314536    4588 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:08:01.314536    4588 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 12:08:01.314536    4588 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 12:08:01.315111    4588 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:08:01.315111    4588 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:08:01.315111    4588 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:08:01.315111    4588 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:08:01.315111    4588 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002683261s
	I0610 12:08:01.315111    4588 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002683261s
	I0610 12:08:01.315111    4588 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:08:01.315111    4588 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:08:01.315955    4588 command_runner.go:130] > [api-check] The API server is healthy after 7.00192s
	I0610 12:08:01.316020    4588 kubeadm.go:309] [api-check] The API server is healthy after 7.00192s
	I0610 12:08:01.316205    4588 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:08:01.316285    4588 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:08:01.316552    4588 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:08:01.316552    4588 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:08:01.316784    4588 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:08:01.316861    4588 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:08:01.317080    4588 kubeadm.go:309] [mark-control-plane] Marking the node multinode-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:08:01.317295    4588 command_runner.go:130] > [mark-control-plane] Marking the node multinode-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:08:01.317406    4588 kubeadm.go:309] [bootstrap-token] Using token: d6w50f.8d5fdo5xwqangh2s
	I0610 12:08:01.317406    4588 command_runner.go:130] > [bootstrap-token] Using token: d6w50f.8d5fdo5xwqangh2s
	I0610 12:08:01.321841    4588 out.go:204]   - Configuring RBAC rules ...
	I0610 12:08:01.322484    4588 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:08:01.322549    4588 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:08:01.322728    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:08:01.322728    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:08:01.323029    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:08:01.323029    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:08:01.323184    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:08:01.323184    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:08:01.323458    4588 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:08:01.323458    4588 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:08:01.323458    4588 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:08:01.323458    4588 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:08:01.323458    4588 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:08:01.323458    4588 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:08:01.323458    4588 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 12:08:01.323458    4588 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 12:08:01.323458    4588 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 12:08:01.323458    4588 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 12:08:01.323458    4588 kubeadm.go:309] 
	I0610 12:08:01.323458    4588 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0610 12:08:01.323458    4588 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 12:08:01.324750    4588 kubeadm.go:309] 
	I0610 12:08:01.324822    4588 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0610 12:08:01.324822    4588 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 12:08:01.324822    4588 kubeadm.go:309] 
	I0610 12:08:01.324822    4588 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0610 12:08:01.324822    4588 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 12:08:01.324822    4588 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:08:01.324822    4588 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:08:01.325344    4588 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:08:01.325383    4588 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:08:01.325383    4588 kubeadm.go:309] 
	I0610 12:08:01.325530    4588 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 12:08:01.325530    4588 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0610 12:08:01.325530    4588 kubeadm.go:309] 
	I0610 12:08:01.325530    4588 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:08:01.325530    4588 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:08:01.325530    4588 kubeadm.go:309] 
	I0610 12:08:01.325530    4588 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 12:08:01.325530    4588 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0610 12:08:01.325530    4588 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:08:01.326068    4588 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:08:01.326160    4588 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:08:01.326160    4588 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:08:01.326160    4588 kubeadm.go:309] 
	I0610 12:08:01.326435    4588 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:08:01.326435    4588 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:08:01.326712    4588 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 12:08:01.326712    4588 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0610 12:08:01.326712    4588 kubeadm.go:309] 
	I0610 12:08:01.327011    4588 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.327011    4588 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.327428    4588 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 \
	I0610 12:08:01.327428    4588 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 \
	I0610 12:08:01.327428    4588 kubeadm.go:309] 	--control-plane 
	I0610 12:08:01.327574    4588 command_runner.go:130] > 	--control-plane 
	I0610 12:08:01.327574    4588 kubeadm.go:309] 
	I0610 12:08:01.327749    4588 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:08:01.327749    4588 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:08:01.327749    4588 kubeadm.go:309] 
	I0610 12:08:01.327914    4588 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.327914    4588 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token d6w50f.8d5fdo5xwqangh2s \
	I0610 12:08:01.328143    4588 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
	I0610 12:08:01.328143    4588 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
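
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA before trusting anything the API server sends. A sketch that recomputes it from ca.crt (the path is the one used on the node):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
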
	I0610 12:08:01.328143    4588 cni.go:84] Creating CNI manager for ""
	I0610 12:08:01.328143    4588 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 12:08:01.330463    4588 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 12:08:01.347784    4588 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 12:08:01.356731    4588 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 12:08:01.356776    4588 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0610 12:08:01.356776    4588 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0610 12:08:01.356776    4588 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 12:08:01.356776    4588 command_runner.go:130] > Access: 2024-06-10 12:05:58.512184000 +0000
	I0610 12:08:01.356776    4588 command_runner.go:130] > Modify: 2024-06-06 15:35:25.000000000 +0000
	I0610 12:08:01.356867    4588 command_runner.go:130] > Change: 2024-06-10 12:05:49.137000000 +0000
	I0610 12:08:01.356867    4588 command_runner.go:130] >  Birth: -
	I0610 12:08:01.356957    4588 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 12:08:01.357012    4588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 12:08:01.407001    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 12:08:01.826713    4588 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0610 12:08:01.826713    4588 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0610 12:08:01.826713    4588 command_runner.go:130] > serviceaccount/kindnet created
	I0610 12:08:01.826713    4588 command_runner.go:130] > daemonset.apps/kindnet created
	I0610 12:08:01.826855    4588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 12:08:01.841874    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-813300 minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=multinode-813300 minikube.k8s.io/primary=true
	I0610 12:08:01.841874    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:01.858654    4588 command_runner.go:130] > -16
	I0610 12:08:01.858754    4588 ops.go:34] apiserver oom_adj: -16
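
The probe above resolves the apiserver PID with pgrep and reads the legacy /proc/<pid>/oom_adj knob; -16 on the old -17..15 scale tells the kernel's OOM killer to strongly avoid the process. A local sketch of the same probe, assuming a single matching PID (pgrep can return several, which would break the shell version too):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
}
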
	I0610 12:08:02.040074    4588 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0610 12:08:02.040074    4588 command_runner.go:130] > node/multinode-813300 labeled
	I0610 12:08:02.055746    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:02.215756    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:02.564403    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:02.693633    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:03.066156    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:03.182182    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:03.552354    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:03.668708    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:04.061778    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:04.182269    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:04.561683    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:04.679824    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:05.065077    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:05.178135    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:05.563037    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:05.683240    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:06.069595    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:06.198551    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:06.567615    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:06.687919    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:07.059024    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:07.199437    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:07.559042    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:07.674044    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:08.065565    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:08.190015    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:08.564648    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:08.688052    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:09.069032    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:09.202107    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:09.560025    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:09.676786    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:10.062974    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:10.186607    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:10.564610    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:10.698529    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:11.060307    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:11.191152    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:11.563418    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:11.690517    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:12.054085    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:12.189950    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:12.562729    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:12.677893    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:13.067953    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:13.195579    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:13.558883    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:13.682493    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:14.061302    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:14.183257    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:14.567678    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:14.763665    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:15.056289    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:15.186893    4588 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 12:08:15.564117    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:08:15.696782    4588 command_runner.go:130] > NAME      SECRETS   AGE
	I0610 12:08:15.696824    4588 command_runner.go:130] > default   0         0s
	I0610 12:08:15.696888    4588 kubeadm.go:1107] duration metric: took 13.8699211s to wait for elevateKubeSystemPrivileges
	W0610 12:08:15.696888    4588 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 12:08:15.696888    4588 kubeadm.go:393] duration metric: took 28.5406976s to StartCluster
	I0610 12:08:15.696888    4588 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:08:15.696888    4588 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:15.699411    4588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:08:15.700711    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 12:08:15.700711    4588 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 12:08:15.704964    4588 out.go:177] * Verifying Kubernetes components...
	I0610 12:08:15.700711    4588 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 12:08:15.701382    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:08:15.707565    4588 addons.go:69] Setting storage-provisioner=true in profile "multinode-813300"
	I0610 12:08:15.707565    4588 addons.go:69] Setting default-storageclass=true in profile "multinode-813300"
	I0610 12:08:15.707565    4588 addons.go:234] Setting addon storage-provisioner=true in "multinode-813300"
	I0610 12:08:15.707565    4588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-813300"
	I0610 12:08:15.707565    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:08:15.708184    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:15.709164    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:15.721781    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:08:16.014416    4588 command_runner.go:130] > apiVersion: v1
	I0610 12:08:16.014416    4588 command_runner.go:130] > data:
	I0610 12:08:16.014416    4588 command_runner.go:130] >   Corefile: |
	I0610 12:08:16.014416    4588 command_runner.go:130] >     .:53 {
	I0610 12:08:16.014416    4588 command_runner.go:130] >         errors
	I0610 12:08:16.014416    4588 command_runner.go:130] >         health {
	I0610 12:08:16.014416    4588 command_runner.go:130] >            lameduck 5s
	I0610 12:08:16.014416    4588 command_runner.go:130] >         }
	I0610 12:08:16.014416    4588 command_runner.go:130] >         ready
	I0610 12:08:16.014416    4588 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0610 12:08:16.014416    4588 command_runner.go:130] >            pods insecure
	I0610 12:08:16.014416    4588 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0610 12:08:16.014416    4588 command_runner.go:130] >            ttl 30
	I0610 12:08:16.014416    4588 command_runner.go:130] >         }
	I0610 12:08:16.014416    4588 command_runner.go:130] >         prometheus :9153
	I0610 12:08:16.014416    4588 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0610 12:08:16.014416    4588 command_runner.go:130] >            max_concurrent 1000
	I0610 12:08:16.014416    4588 command_runner.go:130] >         }
	I0610 12:08:16.014416    4588 command_runner.go:130] >         cache 30
	I0610 12:08:16.014416    4588 command_runner.go:130] >         loop
	I0610 12:08:16.014416    4588 command_runner.go:130] >         reload
	I0610 12:08:16.014416    4588 command_runner.go:130] >         loadbalance
	I0610 12:08:16.014416    4588 command_runner.go:130] >     }
	I0610 12:08:16.014416    4588 command_runner.go:130] > kind: ConfigMap
	I0610 12:08:16.014416    4588 command_runner.go:130] > metadata:
	I0610 12:08:16.014416    4588 command_runner.go:130] >   creationTimestamp: "2024-06-10T12:08:00Z"
	I0610 12:08:16.014416    4588 command_runner.go:130] >   name: coredns
	I0610 12:08:16.014416    4588 command_runner.go:130] >   namespace: kube-system
	I0610 12:08:16.014416    4588 command_runner.go:130] >   resourceVersion: "223"
	I0610 12:08:16.014416    4588 command_runner.go:130] >   uid: 6b6b1b18-8340-404c-ad83-066f280bc1b8
	I0610 12:08:16.014416    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 12:08:16.117425    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:08:16.455420    4588 command_runner.go:130] > configmap/coredns replaced
	I0610 12:08:16.455504    4588 start.go:946] {"host.minikube.internal": 172.17.144.1} host record injected into CoreDNS's ConfigMap
	I0610 12:08:16.457151    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:16.457151    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:16.457851    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:08:16.457851    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:08:16.459915    4588 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 12:08:16.460479    4588 node_ready.go:35] waiting up to 6m0s for node "multinode-813300" to be "Ready" ...
	I0610 12:08:16.460479    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:16.460479    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.460479    4588 round_trippers.go:463] GET https://172.17.159.171:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 12:08:16.460479    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.460479    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.460479    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.460479    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.460479    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.477494    4588 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0610 12:08:16.477494    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Audit-Id: 5d9cb475-9eb4-490b-84cb-48947c853346
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.477690    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.477690    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Content-Length: 291
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.477690    4588 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"362","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.477690    4588 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0610 12:08:16.477690    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.477690    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.478258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.478258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.478258    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.478258    4588 round_trippers.go:580]     Audit-Id: a0a248f5-f010-49bd-be88-f9ce21911653
	I0610 12:08:16.478258    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.478536    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:16.478622    4588 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"362","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.478747    4588 round_trippers.go:463] PUT https://172.17.159.171:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 12:08:16.478747    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.478747    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.478747    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.478747    4588 round_trippers.go:473]     Content-Type: application/json
	I0610 12:08:16.494772    4588 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0610 12:08:16.495065    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.495065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Content-Length: 291
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Audit-Id: d535bcf1-d6e3-4914-8855-21dc33661312
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.495065    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.495065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.495137    4588 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"364","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.969579    4588 round_trippers.go:463] GET https://172.17.159.171:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 12:08:16.969579    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.969579    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.969579    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:16.969579    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:16.969579    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:16.969579    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.969579    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:16.973208    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:16.973208    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.973208    4588 round_trippers.go:580]     Audit-Id: 72e9d5e3-bcfa-467a-b56b-e353a5261918
	I0610 12:08:16.973208    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.973208    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.973208    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.973665    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.973665    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.973920    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:16.973920    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:16.974025    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:16.974025    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:16.974025    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:16.974124    4588 round_trippers.go:580]     Content-Length: 291
	I0610 12:08:16.974025    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:16.974124    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:16 GMT
	I0610 12:08:16.974257    4588 round_trippers.go:580]     Audit-Id: 606c7d1b-8607-486b-901e-1a37f0e7b82a
	I0610 12:08:16.974334    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:16.974445    4588 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0b22d4e8-32b6-4380-8951-181e154eb37c","resourceVersion":"374","creationTimestamp":"2024-06-10T12:08:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0610 12:08:16.974850    4588 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-813300" context rescaled to 1 replicas
	I0610 12:08:17.461815    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:17.461815    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:17.461815    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:17.461815    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:17.466181    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:17.466181    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:17.466181    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:17.466624    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:17.466624    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:17.466624    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:17 GMT
	I0610 12:08:17.466624    4588 round_trippers.go:580]     Audit-Id: f25e967e-f2a6-43d3-b020-a71c67099236
	I0610 12:08:17.466624    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:17.466865    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:17.969784    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:17.969784    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:17.969784    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:17.969784    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:17.973880    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:17.974417    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:17.974417    4588 round_trippers.go:580]     Audit-Id: b880d804-4a72-46ac-a1eb-64811f820ef2
	I0610 12:08:17.974417    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:17.974417    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:17.974505    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:17.974505    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:17.974505    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:17 GMT
	I0610 12:08:17.974850    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:18.148749    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:18.148749    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:18.151774    4588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:08:18.148749    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:18.155349    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:18.155349    4588 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:08:18.155349    4588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 12:08:18.155349    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:18.155769    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:08:18.156778    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:08:18.157762    4588 addons.go:234] Setting addon default-storageclass=true in "multinode-813300"
	I0610 12:08:18.157762    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:08:18.158791    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:18.463954    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:18.464224    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:18.464224    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:18.464224    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:18.468817    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:18.468866    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:18.468866    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:18.468866    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:18 GMT
	I0610 12:08:18.468866    4588 round_trippers.go:580]     Audit-Id: 08ba8b87-2ebe-4b1a-9bc7-7fc5017e34d1
	I0610 12:08:18.469449    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:18.469798    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:18.972076    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:18.972076    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:18.972076    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:18.972076    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:18.975651    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:18.975651    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:18.976021    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:18.976021    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:18 GMT
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Audit-Id: 9c65fa4d-0b55-4681-a48a-3b1a4dbb54ce
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:18.976021    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:18.976441    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:19.462801    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:19.462801    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:19.462801    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:19.462801    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:19.466510    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:19.466510    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Audit-Id: 71bb3ada-5b1d-4303-8b49-627cb8297316
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:19.467506    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:19.467506    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:19.467506    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:19 GMT
	I0610 12:08:19.467506    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:19.971420    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:19.971420    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:19.971517    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:19.971517    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:19.974973    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:19.974973    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:19.974973    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:19.974973    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:19.975460    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:19.975460    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:19 GMT
	I0610 12:08:19.975460    4588 round_trippers.go:580]     Audit-Id: 8cd747ea-2235-458e-8465-b8e6dd798dc6
	I0610 12:08:19.975460    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:19.975966    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:20.464847    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:20.465278    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:20.465387    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:20.465387    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:20.469653    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:20.469653    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:20.469653    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:20.469653    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:20 GMT
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Audit-Id: 77e2b9d7-6f2e-498f-b2b6-39850d5cf023
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:20.469653    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:20.470875    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:20.471154    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:20.673653    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:20.673741    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:20.673874    4588 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 12:08:20.673874    4588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 12:08:20.673943    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:08:20.675325    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:20.675325    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:20.675325    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:08:20.971415    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:20.971628    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:20.971628    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:20.971628    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:20.977135    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:20.977726    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Audit-Id: 85b2432c-b255-446d-91a8-0de43d9b76ca
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:20.977726    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:20.977726    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:20.977726    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:20 GMT
	I0610 12:08:20.978131    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:21.462028    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:21.462138    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:21.462213    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:21.462213    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:21.465088    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:21.465888    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:21.465888    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:21.465888    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:21.465888    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:21.466013    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:21.466013    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:21 GMT
	I0610 12:08:21.466013    4588 round_trippers.go:580]     Audit-Id: 8e7bfa2d-47b3-45cc-a081-3540ba8a26c7
	I0610 12:08:21.466463    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:21.972657    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:21.972657    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:21.972657    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:21.972657    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:21.977058    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:21.977058    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:21.977134    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:21.977134    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:21.977134    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:21.977134    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:21.977218    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:21 GMT
	I0610 12:08:21.977218    4588 round_trippers.go:580]     Audit-Id: 09c72934-2b71-461a-b4fd-0e14aaaf73b0
	I0610 12:08:21.977477    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:22.465513    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:22.465513    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:22.465581    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:22.465581    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:22.468907    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:22.468907    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:22.468907    4588 round_trippers.go:580]     Audit-Id: 046c63ca-5191-4136-ba48-0368a7e8d11c
	I0610 12:08:22.468907    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:22.468907    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:22.469891    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:22.469891    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:22.469891    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:22 GMT
	I0610 12:08:22.469891    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:22.972701    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:22.972701    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:22.972701    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:22.972701    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:22.976321    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:22.976321    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:22.976321    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:22.976321    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:22 GMT
	I0610 12:08:22.976321    4588 round_trippers.go:580]     Audit-Id: cbf84943-c01b-45e1-b8d0-c6fbf9f578a4
	I0610 12:08:22.977441    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:22.977790    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
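Each round_trippers block in this log is one REST call against the apiserver at 172.17.159.171:8443, logged in order: request line, request headers, response status, response headers, and a truncated response body. Stripped of client-go, the call reduces to a plain HTTP GET like the sketch below (unauthenticated and skipping TLS verification purely for illustration; the real client presents the cluster CA and client certificates from the kubeconfig, and an anonymous request like this would normally be rejected):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Illustration only: no client certificates and no CA pinning.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		req, err := http.NewRequest("GET", "https://172.17.159.171:8443/api/v1/nodes/multinode-813300", nil)
		if err != nil {
			panic(err)
		}
		// The same headers round_trippers.go logs for every request above.
		req.Header.Set("Accept", "application/json, */*")
		req.Header.Set("User-Agent", "minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format")
		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		// Print only the first bytes of the JSON body, as the log truncates too.
		n := 200
		if len(body) < n {
			n = len(body)
		}
		fmt.Println(string(body[:n]))
	}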
	I0610 12:08:23.167919    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:08:23.168510    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:23.168510    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:08:23.467192    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:23.467263    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:23.467263    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:23.467263    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:23.470722    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:23.471197    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:23.471197    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:23 GMT
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Audit-Id: 15d64748-9238-483a-8170-ffc83f1d908d
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:23.471197    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:23.471197    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:23.471538    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:23.612259    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:08:23.612340    4588 main.go:141] libmachine: [stderr =====>] : 
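The libmachine exchanges above shell out to PowerShell: one checks the VM state ("Running"), the other reads the first IP address of the VM's first network adapter. A minimal Go sketch of that pattern using os/exec, invoking the exact PowerShell expression shown in the log (vmIP is an illustrative helper, not minikube's Hyper-V driver code; it assumes a Windows host with the Hyper-V PowerShell module installed):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// vmIP runs the same PowerShell expression the log shows libmachine executing
	// and returns the VM's first reported IP address.
	func vmIP(vmName string) (string, error) {
		script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		ip, err := vmIP("multinode-813300")
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		fmt.Println("VM IP:", ip) // the log resolved 172.17.159.171 this way
	}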
	I0610 12:08:23.612790    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:08:23.770726    4588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:08:23.973469    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:24.067126    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:24.067126    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:24.067126    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:24.071456    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:24.071456    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Audit-Id: 3f7761c1-775f-479a-926e-e6e225ae5297
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:24.071456    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:24.071456    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:24.071456    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:24 GMT
	I0610 12:08:24.071917    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:24.381409    4588 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 12:08:24.381500    4588 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0610 12:08:24.381600    4588 command_runner.go:130] > pod/storage-provisioner created
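With the VM's address in hand, the sshutil/ssh_runner lines show minikube dialing the machine as user docker with the machine's private key and running kubectl against the addon manifest; the command_runner lines above are the kubectl output coming back over that session. A self-contained sketch of the flow using golang.org/x/crypto/ssh (runOverSSH is illustrative, not minikube's ssh_runner; the host, key path, and command are copied from the log):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH connects with a private key and runs one remote command,
	// returning its combined stdout/stderr.
	func runOverSSH(addr, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only; pin the host key in real use
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runOverSSH("172.17.159.171:22",
			`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa`,
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
		fmt.Println(out, err)
	}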
	I0610 12:08:24.466424    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:24.466616    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:24.466616    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:24.466616    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:24.469640    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:24.471213    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Audit-Id: 644ee470-8778-4b97-ade1-3d396880a3eb
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:24.471250    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:24.471250    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:24.471250    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:24 GMT
	I0610 12:08:24.471668    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:24.975984    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:24.975984    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:24.976290    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:24.976290    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:24.979743    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:24.979743    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Audit-Id: 577d1627-ffbf-4769-b31e-54336e194420
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:24.979743    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:24.979743    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:24.979743    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:24 GMT
	I0610 12:08:24.980589    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:24.981314    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:25.467082    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:25.467082    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:25.467082    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:25.467405    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:25.471429    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:25.471429    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:25.471429    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:25.471429    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:25 GMT
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Audit-Id: b22a7791-024c-48c8-a3d0-60f86c7bd039
	I0610 12:08:25.471429    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:25.471826    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:25.970625    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:25.970625    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:25.970625    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:25.970625    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:25.975518    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:25.975586    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:25.975586    4588 round_trippers.go:580]     Audit-Id: d71153d6-4e44-462d-ae60-2161aced6f71
	I0610 12:08:25.975586    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:25.975668    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:25.975668    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:25.975668    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:25.975668    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:25 GMT
	I0610 12:08:25.975668    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:26.019285    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:08:26.019893    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:26.020248    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:08:26.163944    4588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 12:08:26.337920    4588 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0610 12:08:26.338319    4588 round_trippers.go:463] GET https://172.17.159.171:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 12:08:26.338580    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.338580    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.338704    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.349001    4588 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 12:08:26.350011    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.350011    4588 round_trippers.go:580]     Audit-Id: 6617c405-50a5-4bfc-aadb-527dd013680d
	I0610 12:08:26.350011    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.350011    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.350063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.350063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.350063    4588 round_trippers.go:580]     Content-Length: 1273
	I0610 12:08:26.350063    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.350188    4588 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"3c2bb998-bd12-48de-88bb-ef852d4ef17b","resourceVersion":"402","creationTimestamp":"2024-06-10T12:08:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T12:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0610 12:08:26.351049    4588 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3c2bb998-bd12-48de-88bb-ef852d4ef17b","resourceVersion":"402","creationTimestamp":"2024-06-10T12:08:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T12:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0610 12:08:26.351165    4588 round_trippers.go:463] PUT https://172.17.159.171:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0610 12:08:26.351165    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.351165    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.351165    4588 round_trippers.go:473]     Content-Type: application/json
	I0610 12:08:26.351231    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.354220    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:26.354220    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Audit-Id: 3328ace5-f8a9-432f-95d6-2e022f2f96ba
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.354220    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.355159    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.355159    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.355159    4588 round_trippers.go:580]     Content-Length: 1220
	I0610 12:08:26.355159    4588 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3c2bb998-bd12-48de-88bb-ef852d4ef17b","resourceVersion":"402","creationTimestamp":"2024-06-10T12:08:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-10T12:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0610 12:08:26.359449    4588 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 12:08:26.362054    4588 addons.go:510] duration metric: took 10.6612568s for enable addons: enabled=[storage-provisioner default-storageclass]
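The GET and PUT against /apis/storage.k8s.io/v1/storageclasses/standard above are the default-storageclass addon re-asserting the storageclass.kubernetes.io/is-default-class annotation right after storageclass.yaml is applied. A hedged client-go sketch of the same get-modify-update round trip (kubeconfig discovery via the default home file and panic-on-error are simplifications for illustration, not minikube's addon code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// The annotation the GET/PUT pair in the log is asserting.
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("storageclass", sc.Name, "marked as default")
	}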
	I0610 12:08:26.472340    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:26.472340    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.472340    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.472340    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.476989    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:26.476989    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.476989    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.476989    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.477437    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.477437    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.477437    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.477437    4588 round_trippers.go:580]     Audit-Id: 2d7eac79-25bf-4e84-bec6-871d0084a72d
	I0610 12:08:26.477671    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:26.973673    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:26.973888    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:26.973888    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:26.973888    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:26.977273    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:26.977273    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:26.977273    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:26.977273    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:26 GMT
	I0610 12:08:26.977273    4588 round_trippers.go:580]     Audit-Id: 4981fd01-235e-4c9f-9367-3a7de9313d0e
	I0610 12:08:26.977273    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:26.978045    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:26.978045    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:26.978205    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:27.462245    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:27.462245    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:27.462245    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:27.462340    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:27.467699    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:27.467699    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:27.467825    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:27.467825    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:27 GMT
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Audit-Id: c9d0a77d-a57e-4d70-84a2-e398f5ffa765
	I0610 12:08:27.467825    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:27.468099    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:27.469115    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:27.960920    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:27.960920    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:27.960920    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:27.960920    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:27.965654    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:27.965654    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:27.965654    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:27.965654    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:27.965654    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:27.966150    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:27 GMT
	I0610 12:08:27.966150    4588 round_trippers.go:580]     Audit-Id: 5d8201db-b32c-4acf-8ad6-345335bd6d2d
	I0610 12:08:27.966150    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:27.966354    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:28.474445    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:28.474445    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:28.474445    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:28.474445    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:28.482343    4588 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:08:28.482431    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:28.482431    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:28.482431    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:28 GMT
	I0610 12:08:28.482431    4588 round_trippers.go:580]     Audit-Id: 31e80831-1c73-4c80-b784-0f1dce4ba371
	I0610 12:08:28.482431    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:28.961355    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:28.961600    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:28.961600    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:28.961600    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:28.965419    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:28.965419    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Audit-Id: 0bdf6c06-0223-405f-8706-dfbe77e36c8b
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:28.965419    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:28.965419    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:28.965419    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:28 GMT
	I0610 12:08:28.966753    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:29.464161    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:29.464216    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:29.464216    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:29.464216    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:29.468789    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:29.468789    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:29 GMT
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Audit-Id: a565b77c-b1b9-4089-8623-2c276f67440d
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:29.468789    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:29.469063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:29.469063    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:29.469412    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:29.469971    4588 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:08:29.962498    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:29.962498    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:29.962498    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:29.962498    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:29.967420    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:29.967881    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:29.967881    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:29.967881    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:29 GMT
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Audit-Id: 16709f5b-fb80-40b1-a6e2-9fdc0e2c33b6
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:29.967881    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:29.967881    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:30.466094    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:30.466389    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.466389    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.466451    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.473102    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:08:30.473102    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.473102    4588 round_trippers.go:580]     Audit-Id: b7b3666c-e49c-4427-9cde-6abd578e055f
	I0610 12:08:30.473102    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.473102    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.473376    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.473376    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.473376    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:30 GMT
	I0610 12:08:30.473554    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"340","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0610 12:08:30.971452    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:30.971452    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.971586    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.971586    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.974265    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:30.974265    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.975194    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.975194    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:30 GMT
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Audit-Id: ee0e8e0f-291b-4fd4-a42f-a1ec6d75fd51
	I0610 12:08:30.975194    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.975506    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"407","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0610 12:08:30.975734    4588 node_ready.go:49] node "multinode-813300" has status "Ready":"True"
	I0610 12:08:30.975734    4588 node_ready.go:38] duration metric: took 14.5151365s for node "multinode-813300" to be "Ready" ...
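The polling that fills most of this log is node_ready.go fetching the Node object roughly every half second and checking its Ready condition, which flipped to True here after 14.5s (the node's resourceVersion moving from 340 to 407 marks the status update). An equivalent readiness poll in client-go might look like the following (nodeReady is an illustrative helper; minikube's actual loop lives in node_ready.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True, the
	// same check the log records as "Ready":"False" / "Ready":"True".
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, roughly the cadence visible in the timestamps above.
		for {
			ready, err := nodeReady(cs, "multinode-813300")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}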
	I0610 12:08:30.975734    4588 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:08:30.975734    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:30.975734    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.975734    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.975734    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.981306    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:30.981425    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.981425    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:30 GMT
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Audit-Id: 938fb101-b66e-4d12-9cf6-8a418d730def
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.981425    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.981425    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.982695    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0610 12:08:30.987017    4588 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
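pod_ready.go then runs the analogous per-pod wait: fetch each system-critical pod, as in the requests below, and test its Ready condition. A minimal sketch of that check (podReady is an illustrative helper; the pod name and namespace come from the log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady mirrors the condition pod_ready.go waits on: PodReady == True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-kbhvv", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", podReady(pod))
	}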
	I0610 12:08:30.987017    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:30.987017    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.987017    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.987017    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.991014    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:30.991014    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.991014    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.991014    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.991014    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.991014    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.991650    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:30.991650    4588 round_trippers.go:580]     Audit-Id: a20fe82f-5987-467b-a829-238d7f03bb9d
	I0610 12:08:30.992127    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:30.992583    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:30.992583    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:30.992583    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:30.992583    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:30.995139    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:30.995139    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:30.995139    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:30.995139    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:30.995139    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:30.995139    4588 round_trippers.go:580]     Audit-Id: 67280c1b-dd0e-4dd1-adff-518782aaded3
	I0610 12:08:30.995139    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:30.995736    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:30.995736    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"407","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0610 12:08:31.497373    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:31.497442    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.497442    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.497503    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:31.500007    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:31.500007    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Audit-Id: a1858d6a-493d-4307-88c5-562319ac0e90
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:31.500951    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:31.500951    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:31.500951    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:31.504473    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:31.505489    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:31.505489    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.505489    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.505489    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:31.511925    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:08:31.512084    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Audit-Id: 75267635-50fe-4afc-8272-36f1623fe090
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:31.512084    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:31.512084    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:31.512084    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:31 GMT
	I0610 12:08:31.512456    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:31.989543    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:31.989543    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.989543    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.989543    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:31.993664    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:31.993817    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Audit-Id: ad39aa17-cc09-4f93-bf6b-cdc9adb39955
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:31.993817    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:31.993817    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:31.993817    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:31.996841    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:31.997224    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:31.997758    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:31.997758    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:31.997758    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.002165    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:32.002165    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.002165    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:32.002165    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:32.002711    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:32.002711    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:32.002711    4588 round_trippers.go:580]     Audit-Id: e13fc67f-b777-4f9b-abfd-1f1127f85080
	I0610 12:08:32.002711    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.002926    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:32.495322    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:32.495503    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:32.495503    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:32.495503    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.499334    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:32.499334    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.499334    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:32.499334    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Audit-Id: 750ca129-89cc-4b31-978b-eb45c8205826
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.499334    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:32.500108    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"411","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0610 12:08:32.500884    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:32.500884    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:32.500939    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:32.500939    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.505349    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:32.505349    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.505349    4588 round_trippers.go:580]     Audit-Id: 4bd7a5b6-e799-44ba-b894-becda2bbf011
	I0610 12:08:32.505349    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.505349    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:32.505349    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:32.505349    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:32.505887    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:32 GMT
	I0610 12:08:32.506152    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:32.995187    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:08:32.995187    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:32.995187    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:32.995187    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:32.999219    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:32.999219    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Audit-Id: 58a497a3-7bd3-4807-989d-93a7abd2266d
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:32.999219    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.000085    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.000085    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.000226    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0610 12:08:33.001482    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.001482    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.001482    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.001482    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.004802    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:33.004802    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.004802    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.004802    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.004802    4588 round_trippers.go:580]     Audit-Id: b39df611-6465-4b74-a9a3-b939651b43fe
	I0610 12:08:33.004802    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.005828    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.005974    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.006340    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.006877    4588 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.006877    4588 pod_ready.go:81] duration metric: took 2.0198434s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.006932    4588 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.007046    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-813300
	I0610 12:08:33.007046    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.007046    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.007094    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.009577    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.009577    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.009577    4588 round_trippers.go:580]     Audit-Id: 76096531-167d-4f83-bd03-e7713e1e8d9d
	I0610 12:08:33.009577    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.009577    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.010082    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.010082    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.010082    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.010082    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"e48af956-8533-4b8e-be5d-0834484cbffa","resourceVersion":"385","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.159.171:2379","kubernetes.io/config.hash":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.mirror":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.seen":"2024-06-10T12:08:00.781973961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0610 12:08:33.010556    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.010556    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.010556    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.010556    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.013440    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.013440    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.013440    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.013440    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Audit-Id: 6b040327-de96-49d5-8e30-1c94f19e6445
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.013440    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.014281    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.014698    4588 pod_ready.go:92] pod "etcd-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.014698    4588 pod_ready.go:81] duration metric: took 7.7654ms for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.014760    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.014878    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-813300
	I0610 12:08:33.014878    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.014908    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.014908    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.019251    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:33.019385    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.019385    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.019385    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.019385    4588 round_trippers.go:580]     Audit-Id: a56c64cd-4b78-4ec4-b317-d23c5bd91346
	I0610 12:08:33.019916    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-813300","namespace":"kube-system","uid":"f824b391-b3d2-49ec-ba7d-863cb2150f81","resourceVersion":"386","creationTimestamp":"2024-06-10T12:07:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.159.171:8443","kubernetes.io/config.hash":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.mirror":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.seen":"2024-06-10T12:07:52.425003820Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0610 12:08:33.020589    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.020695    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.020695    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.020695    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.024226    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:33.024226    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Audit-Id: ba42cb6f-0b20-475d-81bb-08c0c2b424c1
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.024226    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.024226    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.024226    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.024787    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.025075    4588 pod_ready.go:92] pod "kube-apiserver-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.025075    4588 pod_ready.go:81] duration metric: took 10.3143ms for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.025075    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.025075    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-813300
	I0610 12:08:33.025075    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.025075    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.025075    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.027688    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.027688    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Audit-Id: 627bf56d-7d78-4898-b65b-7e67c35b4b59
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.027688    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.027688    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.027688    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.028800    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-813300","namespace":"kube-system","uid":"879be9d7-8b2b-4f58-ba70-61d4e9f3441e","resourceVersion":"384","creationTimestamp":"2024-06-10T12:08:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.mirror":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.seen":"2024-06-10T12:08:00.781970961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0610 12:08:33.029481    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.029481    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.029481    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.029481    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.031724    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.031724    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Audit-Id: 545f7fb9-5389-46a1-9ca7-54eea814ce0e
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.031724    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.031724    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.031724    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.032537    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.033863    4588 pod_ready.go:92] pod "kube-controller-manager-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.034008    4588 pod_ready.go:81] duration metric: took 8.9332ms for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.034008    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.034008    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:08:33.034008    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.034008    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.034229    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.036496    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.036496    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.036496    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Audit-Id: 711cf59f-d3e3-4f21-a5db-187fe7f58c13
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.036496    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.036496    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.036496    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrpvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1","resourceVersion":"380","creationTimestamp":"2024-06-10T12:08:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0610 12:08:33.037906    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.037952    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.038071    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.038071    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.040362    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.040362    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.040362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Audit-Id: e6d94a88-bd9f-4626-b1c3-879d50c77dd8
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.040362    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.040362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.041393    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.041808    4588 pod_ready.go:92] pod "kube-proxy-nrpvt" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.041808    4588 pod_ready.go:81] duration metric: took 7.8004ms for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.041877    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.195916    4588 request.go:629] Waited for 154.0375ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:08:33.196165    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:08:33.196165    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.196165    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.196232    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.202934    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:08:33.203372    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.203372    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.203372    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.203372    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.203439    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.203439    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.203439    4588 round_trippers.go:580]     Audit-Id: 3370c09f-361f-45e5-a7c2-7da8cdbd9831
	I0610 12:08:33.203622    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-813300","namespace":"kube-system","uid":"bd85735c-2f0d-48ab-bb0e-83f471c3af0a","resourceVersion":"387","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.mirror":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.seen":"2024-06-10T12:08:00.781972261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0610 12:08:33.400282    4588 request.go:629] Waited for 195.7136ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.400649    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:08:33.400673    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.400673    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.400673    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.403562    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:08:33.403562    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Audit-Id: dec8d733-a395-4375-9e53-c5161847aeac
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.403562    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.403562    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.403562    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.404668    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:08:33.405082    4588 pod_ready.go:92] pod "kube-scheduler-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:08:33.405082    4588 pod_ready.go:81] duration metric: took 363.2018ms for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:08:33.405082    4588 pod_ready.go:38] duration metric: took 2.4293279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
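	The block above is the readiness gate minikube applies before declaring the control plane usable: for each system-critical pod it alternates a GET on the pod with a GET on its node, on roughly a 500ms cadence, until the pod's Ready condition reports True (coredns flips at resourceVersion 427 above). A minimal client-go sketch of that loop follows; it is not minikube's actual code, and the kubeconfig-based clientset wiring is an assumption for illustration.

	// Hypothetical sketch of the readiness poll traced in the log above.
	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls every 500ms (the cadence visible in the log)
	// until the pod's Ready condition is True or the budget expires.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // sketch choice: tolerate transient GET errors and keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Assumption: KUBECONFIG points at a reachable cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-kbhvv"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}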
	I0610 12:08:33.405082    4588 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:08:33.419788    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:08:33.450414    4588 command_runner.go:130] > 1957
	I0610 12:08:33.450668    4588 api_server.go:72] duration metric: took 17.7498125s to wait for apiserver process to appear ...
	I0610 12:08:33.450668    4588 api_server.go:88] waiting for apiserver healthz status ...
	I0610 12:08:33.450668    4588 api_server.go:253] Checking apiserver healthz at https://172.17.159.171:8443/healthz ...
	I0610 12:08:33.458286    4588 api_server.go:279] https://172.17.159.171:8443/healthz returned 200:
	ok
	I0610 12:08:33.458286    4588 round_trippers.go:463] GET https://172.17.159.171:8443/version
	I0610 12:08:33.458286    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.458286    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.458286    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.462485    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:33.462485    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.462485    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.462485    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Content-Length: 263
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Audit-Id: 16c16afd-0fbc-487c-ad2f-457898147096
	I0610 12:08:33.462485    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.463107    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.463107    4588 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 12:08:33.463254    4588 api_server.go:141] control plane version: v1.30.1
	I0610 12:08:33.463254    4588 api_server.go:131] duration metric: took 12.5864ms to wait for apiserver health ...
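	With the pods Ready, the log shows a two-step apiserver gate: /healthz must return the literal body "ok", then /version is decoded to report the control plane version (v1.30.1 here). A hedged sketch of the same two checks through an authenticated client-go discovery client, with the clientset construction assumed as in the previous sketch:

	// Sketch of the healthz + version gate; not minikube's actual code.
	package main

	import (
		"context"
		"fmt"
		"os"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// /healthz returns the literal body "ok" when the apiserver is healthy,
		// matching the "returned 200: ok" pair in the log.
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body)

		// ServerVersion hits /version and decodes the JSON body shown above.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Printf("control plane version: %s\n", v.GitVersion)
	}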
	I0610 12:08:33.463316    4588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:08:33.605309    4588 request.go:629] Waited for 141.9539ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:33.605546    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:33.605546    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.605546    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.605546    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.611373    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:33.612010    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.612010    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.612080    4588 round_trippers.go:580]     Audit-Id: 8601551f-3309-4d3c-a243-c54f622ba627
	I0610 12:08:33.612080    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.612080    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.612080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.612080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.613396    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0610 12:08:33.616317    4588 system_pods.go:59] 8 kube-system pods found
	I0610 12:08:33.616317    4588 system_pods.go:61] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "etcd-multinode-813300" [e48af956-8533-4b8e-be5d-0834484cbffa] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-apiserver-multinode-813300" [f824b391-b3d2-49ec-ba7d-863cb2150f81] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running
	I0610 12:08:33.616317    4588 system_pods.go:61] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:08:33.616317    4588 system_pods.go:74] duration metric: took 153.0001ms to wait for pod list to return data ...
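	The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's own token-bucket rate limiter (historically defaulting to QPS=5, Burst=10), not by the apiserver's priority-and-fairness machinery; the message itself makes that distinction. A sketch of where those knobs live on rest.Config, assuming the same kubeconfig-based setup; the values shown are illustrative, and raising them trades fewer client-side waits for more apiserver load:

	// Sketch: tuning client-go's client-side rate limiter.
	package main

	import (
		"os"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		// Illustrative values; server-side priority and fairness still applies.
		cfg.QPS = 50
		cfg.Burst = 100
		_ = kubernetes.NewForConfigOrDie(cfg)
	}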
	I0610 12:08:33.616317    4588 default_sa.go:34] waiting for default service account to be created ...
	I0610 12:08:33.808138    4588 request.go:629] Waited for 191.1567ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/default/serviceaccounts
	I0610 12:08:33.808225    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/default/serviceaccounts
	I0610 12:08:33.808225    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:33.808225    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:33.808225    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:33.813003    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:08:33.813365    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Content-Length: 261
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:33 GMT
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Audit-Id: 53fadb3a-0bcd-4518-aaa6-0171143260ed
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:33.813365    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:33.813365    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:33.813365    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:33.813459    4588 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2033967b-ff48-4641-b518-45705bf023c6","resourceVersion":"336","creationTimestamp":"2024-06-10T12:08:15Z"}}]}
	I0610 12:08:33.813646    4588 default_sa.go:45] found service account: "default"
	I0610 12:08:33.813646    4588 default_sa.go:55] duration metric: took 197.3272ms for default service account to be created ...
	I0610 12:08:33.813646    4588 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 12:08:34.013591    4588 request.go:629] Waited for 199.9428ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:34.013591    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:08:34.013591    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:34.013591    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:34.013591    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:34.019566    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:08:34.019566    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Audit-Id: ccddedc7-4912-4f64-a5db-e857ae601e77
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:34.019566    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:34.019566    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:34.019566    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:34 GMT
	I0610 12:08:34.022328    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0610 12:08:34.025311    4588 system_pods.go:86] 8 kube-system pods found
	I0610 12:08:34.025311    4588 system_pods.go:89] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "etcd-multinode-813300" [e48af956-8533-4b8e-be5d-0834484cbffa] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-apiserver-multinode-813300" [f824b391-b3d2-49ec-ba7d-863cb2150f81] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running
	I0610 12:08:34.025447    4588 system_pods.go:89] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:08:34.025447    4588 system_pods.go:126] duration metric: took 211.7988ms to wait for k8s-apps to be running ...
	I0610 12:08:34.025531    4588 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:08:34.036640    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:08:34.068920    4588 system_svc.go:56] duration metric: took 43.0864ms WaitForService to wait for kubelet
	I0610 12:08:34.068920    4588 kubeadm.go:576] duration metric: took 18.3680596s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
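	The kubelet check above is a single command run over SSH inside the guest: systemctl's is-active --quiet exits 0 only when the unit is active, so the caller inspects the exit status rather than parsing output. A local stand-in sketch (minikube's ssh_runner plumbing and the exact argument list in the log are not reproduced here; this assumes a Linux host with systemd):

	// Sketch: check a systemd unit's liveness by exit code, as the log's
	// "sudo systemctl is-active --quiet ..." step does inside the VM.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			// Non-zero exit (or exec failure) means the unit is not active.
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}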
	I0610 12:08:34.068920    4588 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:08:34.200619    4588 request.go:629] Waited for 131.5276ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes
	I0610 12:08:34.200701    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes
	I0610 12:08:34.200763    4588 round_trippers.go:469] Request Headers:
	I0610 12:08:34.200763    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:08:34.200763    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:08:34.204676    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:08:34.204676    4588 round_trippers.go:577] Response Headers:
	I0610 12:08:34.204676    4588 round_trippers.go:580]     Audit-Id: f224ea65-0cb9-4a1e-8a42-23d61494a02a
	I0610 12:08:34.204676    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:08:34.205255    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:08:34.205255    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:08:34.205255    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:08:34.205255    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:08:34 GMT
	I0610 12:08:34.205556    4588 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5013 chars]
	I0610 12:08:34.206165    4588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:08:34.206219    4588 node_conditions.go:123] node cpu capacity is 2
	I0610 12:08:34.206219    4588 node_conditions.go:105] duration metric: took 137.298ms to run NodePressure ...
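[editor's note] The NodePressure check above simply reads status.capacity from the NodeList response. A host-side sketch of the same lookup (assumes kubectl on PATH and the multinode-813300 kubeconfig context; illustrative only, not part of the test run):

    # Read the same Node capacity fields minikube verifies above.
    $json  = kubectl --context multinode-813300 get nodes -o json | Out-String
    $nodes = $json | ConvertFrom-Json
    foreach ($n in $nodes.items) {
        # Same fields the log reports: CPU and ephemeral-storage capacity.
        '{0}: cpu={1} ephemeral-storage={2}' -f $n.metadata.name,
            $n.status.capacity.cpu, $n.status.capacity.'ephemeral-storage'
    }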
	I0610 12:08:34.206273    4588 start.go:240] waiting for startup goroutines ...
	I0610 12:08:34.206302    4588 start.go:245] waiting for cluster config update ...
	I0610 12:08:34.206396    4588 start.go:254] writing updated cluster config ...
	I0610 12:08:34.210462    4588 out.go:177] 
	I0610 12:08:34.211951    4588 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:08:34.219370    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:08:34.219370    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:08:34.230682    4588 out.go:177] * Starting "multinode-813300-m02" worker node in "multinode-813300" cluster
	I0610 12:08:34.232875    4588 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:08:34.232875    4588 cache.go:56] Caching tarball of preloaded images
	I0610 12:08:34.232875    4588 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:08:34.232875    4588 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 12:08:34.233735    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:08:34.236944    4588 start.go:360] acquireMachinesLock for multinode-813300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:08:34.236944    4588 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300-m02"
	I0610 12:08:34.237615    4588 start.go:93] Provisioning new machine with config: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:08:34.237615    4588 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0610 12:08:34.239702    4588 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 12:08:34.239702    4588 start.go:159] libmachine.API.Create for "multinode-813300" (driver="hyperv")
	I0610 12:08:34.240395    4588 client.go:168] LocalClient.Create starting
	I0610 12:08:34.240700    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0610 12:08:34.240700    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:08:34.241203    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:08:34.241370    4588 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0610 12:08:34.241548    4588 main.go:141] libmachine: Decoding PEM data...
	I0610 12:08:34.241548    4588 main.go:141] libmachine: Parsing certificate...
	I0610 12:08:34.241738    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0610 12:08:36.262319    4588 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0610 12:08:36.262385    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:36.262385    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0610 12:08:38.140816    4588 main.go:141] libmachine: [stdout =====>] : False
	
	I0610 12:08:38.141270    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:38.141270    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:08:39.735536    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:08:39.735536    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:39.735536    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:08:43.725162    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:08:43.725162    4588 main.go:141] libmachine: [stderr =====>] : 
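[editor's note] The JSON above comes from the switch probe minikube runs before creating the VM. Reflowed for readability (same pipeline as the powershell.exe invocation above); note SwitchType serializes as an integer, and the value 1 here is Internal, so "Default Switch" matches only through its well-known GUID rather than the External filter:

    Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or
                       ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType |
        ConvertTo-Json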
	I0610 12:08:43.727495    4588 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 12:08:44.236510    4588 main.go:141] libmachine: Creating SSH key...
	I0610 12:08:44.388057    4588 main.go:141] libmachine: Creating VM...
	I0610 12:08:44.388057    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0610 12:08:47.561217    4588 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0610 12:08:47.561217    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:47.561217    4588 main.go:141] libmachine: Using switch "Default Switch"
	I0610 12:08:47.561217    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0610 12:08:49.510281    4588 main.go:141] libmachine: [stdout =====>] : True
	
	I0610 12:08:49.510430    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:49.510430    4588 main.go:141] libmachine: Creating VHD
	I0610 12:08:49.510430    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0610 12:08:53.452049    4588 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 20794A7E-9F85-4605-9CFB-9AB5A2243F5C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0610 12:08:53.452049    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:53.452049    4588 main.go:141] libmachine: Writing magic tar header
	I0610 12:08:53.452049    4588 main.go:141] libmachine: Writing SSH key tar header
	I0610 12:08:53.463808    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0610 12:08:56.776237    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:08:56.776237    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:08:56.776915    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\disk.vhd' -SizeBytes 20000MB
	I0610 12:08:59.460936    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:08:59.460999    4588 main.go:141] libmachine: [stderr =====>] : 
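[editor's note] The three VHD operations above are libmachine's standard Hyper-V disk bootstrap: create a tiny fixed-size VHD whose data area can be written like a raw file, write a tar stream containing the SSH key into it (the "magic tar header" lines above) so the guest can unpack it on first boot, then re-encode the disk as dynamic and grow it to the requested size. Condensed sketch, using the paths from this run:

    $dir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02'
    # 1. Tiny fixed VHD (raw, flat layout).
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # 2. (minikube writes the tar header + SSH key into fixed.vhd here.)
    # 3. Re-encode as a dynamic disk, deleting the fixed source.
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    # 4. Grow the dynamic disk to the configured 20000 MB.
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB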
	I0610 12:08:59.460999    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-813300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0610 12:09:03.294382    4588 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-813300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0610 12:09:03.295386    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:03.295486    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-813300-m02 -DynamicMemoryEnabled $false
	I0610 12:09:05.730826    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:05.731605    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:05.731605    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-813300-m02 -Count 2
	I0610 12:09:08.091225    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:08.091225    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:08.091389    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-813300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\boot2docker.iso'
	I0610 12:09:10.917877    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:10.918532    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:10.918532    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-813300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\disk.vhd'
	I0610 12:09:13.890119    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:13.891006    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:13.891060    4588 main.go:141] libmachine: Starting VM...
	I0610 12:09:13.891060    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300-m02
	I0610 12:09:17.217967    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:17.217967    4588 main.go:141] libmachine: [stderr =====>] : 
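[editor's note] Collected from the invocations above, the whole VM assembly is six cmdlets (values exactly as executed in this run):

    $name = 'multinode-813300-m02'
    $dir  = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02'
    Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false   # pin the 2200 MB
    Hyper-V\Set-VMProcessor $name -Count 2
    Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso" # boot ISO
    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"   # disk prepared above
    Hyper-V\Start-VM $name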
	I0610 12:09:17.218129    4588 main.go:141] libmachine: Waiting for host to start...
	I0610 12:09:17.218287    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:19.673262    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:19.673262    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:19.673574    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:22.445782    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:22.445782    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:23.455957    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:25.876321    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:25.876909    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:25.876979    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:28.622723    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:28.622723    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:29.627749    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:32.027877    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:32.027952    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:32.027991    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:34.791963    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:34.791963    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:35.800230    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:38.203051    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:38.203636    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:38.203636    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:40.963011    4588 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:09:40.963011    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:41.973628    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:44.416582    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:44.416582    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:44.416582    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:47.254049    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:09:47.254049    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:47.254049    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:49.644892    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:49.644892    4588 main.go:141] libmachine: [stderr =====>] : 
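[editor's note] Every state/ipaddresses pair above is one iteration of a poll loop: the adapter reports no address until the guest's integration services come up, which took roughly 30 seconds in this run. An equivalent loop (sketch; the sleep interval is illustrative):

    $name = 'multinode-813300-m02'
    do {
        Start-Sleep -Seconds 1
        # Same expression the log executes via powershell.exe each round.
        $ip = ((Hyper-V\Get-VM $name).NetworkAdapters[0]).IPAddresses |
              Select-Object -First 1
    } until ($ip)
    $ip   # -> 172.17.151.128 in this run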
	I0610 12:09:49.645559    4588 machine.go:94] provisionDockerMachine start ...
	I0610 12:09:49.645788    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:51.995513    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:51.995513    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:51.995513    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:54.722854    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:09:54.722854    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:54.729030    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:09:54.740222    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:09:54.741219    4588 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:09:54.870273    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:09:54.870349    4588 buildroot.go:166] provisioning hostname "multinode-813300-m02"
	I0610 12:09:54.870417    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:09:57.155923    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:09:57.156835    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:57.156835    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:09:59.869088    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:09:59.869870    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:09:59.876256    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:09:59.876256    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:09:59.876845    4588 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300-m02 && echo "multinode-813300-m02" | sudo tee /etc/hostname
	I0610 12:10:00.036418    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300-m02
	
	I0610 12:10:00.036539    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:02.352338    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:02.352338    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:02.352850    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:05.115922    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:05.116005    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:05.120761    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:05.121019    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:05.121019    4588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:10:05.266489    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:10:05.266489    4588 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:10:05.266489    4588 buildroot.go:174] setting up certificates
	I0610 12:10:05.266489    4588 provision.go:84] configureAuth start
	I0610 12:10:05.266489    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:07.629056    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:07.629289    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:07.629378    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:10.421266    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:10.422131    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:10.422131    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:12.788172    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:12.788347    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:12.788347    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:15.586195    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:15.586195    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:15.586847    4588 provision.go:143] copyHostCerts
	I0610 12:10:15.587004    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 12:10:15.587261    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:10:15.587261    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:10:15.587727    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:10:15.588865    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 12:10:15.589171    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:10:15.589171    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:10:15.589536    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:10:15.589840    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 12:10:15.590722    4588 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:10:15.590722    4588 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:10:15.591178    4588 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:10:15.592371    4588 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300-m02 san=[127.0.0.1 172.17.151.128 localhost minikube multinode-813300-m02]
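[editor's note] The server certificate generated above carries the SANs listed in the log (127.0.0.1, the node IP, localhost, minikube, and the node hostname). A host-side way to double-check them (sketch; assumes the .NET certificate class can parse the PEM file, which holds a single base64-encoded certificate):

    $pem  = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem'
    $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($pem)
    $cert.Extensions |
        Where-Object { $_.Oid.FriendlyName -eq 'Subject Alternative Name' } |
        ForEach-Object { $_.Format($true) }   # prints the DNS and IP entries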
	I0610 12:10:15.916216    4588 provision.go:177] copyRemoteCerts
	I0610 12:10:15.928750    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:10:15.928750    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:18.250037    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:18.250938    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:18.250996    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:20.970158    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:20.971086    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:20.971674    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:10:21.079420    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1499555s)
	I0610 12:10:21.079420    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 12:10:21.079775    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:10:21.131679    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 12:10:21.132137    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 12:10:21.184128    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 12:10:21.184257    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 12:10:21.239558    4588 provision.go:87] duration metric: took 15.9729376s to configureAuth
	I0610 12:10:21.239632    4588 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:10:21.240051    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:10:21.240051    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:23.584229    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:23.584229    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:23.584318    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:26.362007    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:26.362153    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:26.368272    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:26.369078    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:26.369078    4588 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:10:26.500066    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:10:26.500204    4588 buildroot.go:70] root file system type: tmpfs
	I0610 12:10:26.500502    4588 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:10:26.500502    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:28.830472    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:28.830822    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:28.830822    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:31.638236    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:31.638722    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:31.645248    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:31.645248    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:31.645990    4588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.159.171"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:10:31.817981    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.159.171
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 12:10:31.817981    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:34.157297    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:34.157297    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:34.157297    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:36.961294    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:36.962039    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:36.967778    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:36.968315    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:36.968475    4588 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:10:39.155315    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:10:39.155315    4588 machine.go:97] duration metric: took 49.5093501s to provisionDockerMachine
	I0610 12:10:39.155315    4588 client.go:171] duration metric: took 2m4.9138483s to LocalClient.Create
	I0610 12:10:39.155867    4588 start.go:167] duration metric: took 2m4.9151413s to libmachine.API.Create "multinode-813300"
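[editor's note] The diff/mv/systemctl one-liner a few lines above is an install-if-changed guard: diff exits non-zero when the rendered unit differs from the installed one (or, as on this fresh node, when docker.service does not exist yet, hence the "can't stat" message), and only then is the new file moved into place, the daemon reloaded, and docker enabled and restarted. Reflowed as a host-side sketch (same remote command, sent over SSH with this node's key):

    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa'
    $cmd = 'sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ' +
           '{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ' +
           'sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }'
    ssh -i $key docker@172.17.151.128 $cmd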
	I0610 12:10:39.155867    4588 start.go:293] postStartSetup for "multinode-813300-m02" (driver="hyperv")
	I0610 12:10:39.155986    4588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:10:39.168428    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:10:39.168428    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:41.493819    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:41.493819    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:41.493819    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:44.301123    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:44.301123    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:44.301723    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:10:44.414294    4588 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2457575s)
	I0610 12:10:44.427480    4588 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:10:44.434767    4588 command_runner.go:130] > NAME=Buildroot
	I0610 12:10:44.434767    4588 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 12:10:44.434904    4588 command_runner.go:130] > ID=buildroot
	I0610 12:10:44.434904    4588 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 12:10:44.434904    4588 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 12:10:44.435037    4588 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:10:44.435068    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:10:44.435634    4588 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:10:44.437223    4588 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:10:44.437223    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 12:10:44.450343    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:10:44.472867    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:10:44.524171    4588 start.go:296] duration metric: took 5.3682595s for postStartSetup
	I0610 12:10:44.527309    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:46.868202    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:46.868202    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:46.868202    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:49.582486    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:49.582486    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:49.583022    4588 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:10:49.587441    4588 start.go:128] duration metric: took 2m15.3487158s to createHost
	I0610 12:10:49.587441    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:51.933279    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:51.933279    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:51.933844    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:54.672496    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:54.672834    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:54.677987    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:54.677987    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:54.678509    4588 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 12:10:54.806576    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718021454.812845033
	
	I0610 12:10:54.806642    4588 fix.go:216] guest clock: 1718021454.812845033
	I0610 12:10:54.806642    4588 fix.go:229] Guest: 2024-06-10 12:10:54.812845033 +0000 UTC Remote: 2024-06-10 12:10:49.587441 +0000 UTC m=+365.885567601 (delta=5.225404033s)
	I0610 12:10:54.806642    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:10:57.087646    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:10:57.087989    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:57.088094    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:10:59.860973    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:10:59.860973    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:10:59.866816    4588 main.go:141] libmachine: Using SSH client type: native
	I0610 12:10:59.866884    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.151.128 22 <nil> <nil>}
	I0610 12:10:59.866884    4588 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718021454
	I0610 12:11:00.015191    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:10:54 UTC 2024
	
	I0610 12:11:00.015191    4588 fix.go:236] clock set: Mon Jun 10 12:10:54 UTC 2024
	 (err=<nil>)
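[editor's note] The clock fix above reads the guest's `date +%s.%N`, compares it against the host-side timestamp recorded when createHost finished (a 5.2 s delta here), and resets the guest clock with whole-second precision via `date -s @<epoch>`. Host-side sketch of the same check ($hostNow and the 1 s threshold are illustrative, not minikube's exact values):

    $key     = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa'
    $guest   = [double](ssh -i $key docker@172.17.151.128 'date +%s.%N')
    $hostNow = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
    if ([math]::Abs($guest - $hostNow) -gt 1) {
        # Whole seconds only, matching the `sudo date -s @1718021454` above.
        ssh -i $key docker@172.17.151.128 "sudo date -s @$hostNow"
    }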
	I0610 12:11:00.015191    4588 start.go:83] releasing machines lock for "multinode-813300-m02", held for 2m25.7770525s
	I0610 12:11:00.015500    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:11:02.362997    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:02.362997    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:02.363073    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:05.203470    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:11:05.203551    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:05.208269    4588 out.go:177] * Found network options:
	I0610 12:11:05.211963    4588 out.go:177]   - NO_PROXY=172.17.159.171
	W0610 12:11:05.214531    4588 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 12:11:05.217146    4588 out.go:177]   - NO_PROXY=172.17.159.171
	W0610 12:11:05.219128    4588 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 12:11:05.221154    4588 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 12:11:05.223154    4588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:11:05.223154    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:11:05.233134    4588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 12:11:05.233134    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:11:07.621816    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:07.622648    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:07.622943    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:10.545475    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:11:10.545604    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:10.546196    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:11:10.557804    4588 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:11:10.557804    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:10.558804    4588 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:11:10.655498    4588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0610 12:11:10.780338    4588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 12:11:10.780338    4588 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5571395s)
	I0610 12:11:10.780338    4588 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.5471587s)
	W0610 12:11:10.780338    4588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:11:10.792576    4588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:11:10.825526    4588 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 12:11:10.825771    4588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 12:11:10.825771    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:11:10.825771    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:11:10.868331    4588 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 12:11:10.886782    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:11:10.926185    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:11:10.951492    4588 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:11:10.964107    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:11:10.998277    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:11:11.036407    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:11:11.071765    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:11:11.112069    4588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:11:11.147207    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:11:11.180467    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:11:11.213384    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 12:11:11.244518    4588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:11:11.263227    4588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 12:11:11.274302    4588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:11:11.307150    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:11.524102    4588 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 12:11:11.560382    4588 start.go:494] detecting cgroup driver to use...
	I0610 12:11:11.573859    4588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:11:11.598593    4588 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 12:11:11.598631    4588 command_runner.go:130] > [Unit]
	I0610 12:11:11.598631    4588 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 12:11:11.598668    4588 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 12:11:11.598668    4588 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 12:11:11.598668    4588 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 12:11:11.598668    4588 command_runner.go:130] > StartLimitBurst=3
	I0610 12:11:11.598668    4588 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 12:11:11.598727    4588 command_runner.go:130] > [Service]
	I0610 12:11:11.598727    4588 command_runner.go:130] > Type=notify
	I0610 12:11:11.598727    4588 command_runner.go:130] > Restart=on-failure
	I0610 12:11:11.598727    4588 command_runner.go:130] > Environment=NO_PROXY=172.17.159.171
	I0610 12:11:11.598727    4588 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 12:11:11.598727    4588 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 12:11:11.598863    4588 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 12:11:11.598863    4588 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 12:11:11.598863    4588 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 12:11:11.598863    4588 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 12:11:11.598863    4588 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 12:11:11.598963    4588 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 12:11:11.598963    4588 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 12:11:11.598963    4588 command_runner.go:130] > ExecStart=
	I0610 12:11:11.598963    4588 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 12:11:11.599028    4588 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 12:11:11.599028    4588 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 12:11:11.599028    4588 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 12:11:11.599028    4588 command_runner.go:130] > LimitNOFILE=infinity
	I0610 12:11:11.599028    4588 command_runner.go:130] > LimitNPROC=infinity
	I0610 12:11:11.599028    4588 command_runner.go:130] > LimitCORE=infinity
	I0610 12:11:11.599028    4588 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 12:11:11.599028    4588 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 12:11:11.599140    4588 command_runner.go:130] > TasksMax=infinity
	I0610 12:11:11.599140    4588 command_runner.go:130] > TimeoutStartSec=0
	I0610 12:11:11.599140    4588 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 12:11:11.599140    4588 command_runner.go:130] > Delegate=yes
	I0610 12:11:11.599140    4588 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 12:11:11.599140    4588 command_runner.go:130] > KillMode=process
	I0610 12:11:11.599140    4588 command_runner.go:130] > [Install]
	I0610 12:11:11.599140    4588 command_runner.go:130] > WantedBy=multi-user.target
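Editor's note: the unit dump above illustrates the systemd drop-in idiom the comments in it describe: an empty "ExecStart=" first clears the command inherited from the base unit, and the second "ExecStart=" sets the real one, since non-oneshot services may only have a single ExecStart. A hedged Go sketch of rendering such a drop-in follows; the template and extra-args field are hypothetical, not the file minikube actually ships.

package main

import (
	"os"
	"text/template"
)

// dropIn is a hypothetical docker.service override. The empty ExecStart=
// clears the inherited command; without it systemd would see two
// ExecStart settings and refuse to start the service.
const dropIn = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.ExtraArgs}}
`

func main() {
	t := template.Must(template.New("dropin").Parse(dropIn))
	_ = t.Execute(os.Stdout, struct{ ExtraArgs string }{"--label provider=hyperv"})
}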
	I0610 12:11:11.612843    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:11:11.652543    4588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:11:11.699581    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:11:11.738711    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:11:11.780078    4588 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:11:11.854242    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:11:11.887820    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:11:11.926828    4588 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 12:11:11.941661    4588 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:11:11.949084    4588 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 12:11:11.960762    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:11:11.987519    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:11:12.036700    4588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:11:12.255159    4588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:11:12.474321    4588 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:11:12.474461    4588 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
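Editor's note: the 130-byte /etc/docker/daemon.json pushed here carries the cgroup-driver choice that docker.go:574 announces. The log does not show the payload itself; the sketch below is an assumption based on Docker's documented "exec-opts" key, which is the standard way to select native.cgroupdriver=cgroupfs.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Minimal daemon.json selecting the cgroupfs driver; the real file
	// minikube writes likely has a few more keys (not shown in the log).
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}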
	I0610 12:11:12.521376    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:12.736988    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:11:15.281594    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5445856s)
	I0610 12:11:15.295747    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 12:11:15.337687    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:11:15.375551    4588 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 12:11:15.617767    4588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 12:11:15.838434    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:16.049989    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 12:11:16.095406    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:11:16.132342    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:16.337717    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
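Editor's note: cri-dockerd is socket-activated, so the sequence above brings the socket up first (unmask, enable, restart) with daemon-reloads in between, and only then restarts the service; the socket-path wait that follows confirms activation worked. A trivial Go sketch of that ordering, run here via sudo for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// Replay the systemctl ordering from the log: socket before service,
// with a daemon-reload between unit changes.
func main() {
	steps := [][]string{
		{"systemctl", "unmask", "cri-docker.socket"},
		{"systemctl", "enable", "cri-docker.socket"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "cri-docker.socket"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "cri-docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", s, err, out)
			return
		}
	}
}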
	I0610 12:11:16.465652    4588 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 12:11:16.479852    4588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 12:11:16.489205    4588 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 12:11:16.489286    4588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 12:11:16.489318    4588 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0610 12:11:16.489345    4588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 12:11:16.489345    4588 command_runner.go:130] > Access: 2024-06-10 12:11:16.374337285 +0000
	I0610 12:11:16.489394    4588 command_runner.go:130] > Modify: 2024-06-10 12:11:16.374337285 +0000
	I0610 12:11:16.489394    4588 command_runner.go:130] > Change: 2024-06-10 12:11:16.377337327 +0000
	I0610 12:11:16.489428    4588 command_runner.go:130] >  Birth: -
	I0610 12:11:16.489428    4588 start.go:562] Will wait 60s for crictl version
	I0610 12:11:16.501661    4588 ssh_runner.go:195] Run: which crictl
	I0610 12:11:16.508650    4588 command_runner.go:130] > /usr/bin/crictl
	I0610 12:11:16.522045    4588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:11:16.577734    4588 command_runner.go:130] > Version:  0.1.0
	I0610 12:11:16.577734    4588 command_runner.go:130] > RuntimeName:  docker
	I0610 12:11:16.577734    4588 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 12:11:16.577734    4588 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 12:11:16.577867    4588 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 12:11:16.586649    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:11:16.627174    4588 command_runner.go:130] > 26.1.4
	I0610 12:11:16.637565    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:11:16.672485    4588 command_runner.go:130] > 26.1.4
	I0610 12:11:16.677357    4588 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 12:11:16.680604    4588 out.go:177]   - env NO_PROXY=172.17.159.171
	I0610 12:11:16.682631    4588 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 12:11:16.687146    4588 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 12:11:16.690150    4588 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 12:11:16.690150    4588 ip.go:210] interface addr: 172.17.144.1/20
	I0610 12:11:16.703778    4588 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 12:11:16.711418    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
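Editor's note: the /etc/hosts edit above is idempotent: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the temp file is copied back over /etc/hosts in one step. A Go sketch of the same replace-then-append update, with an illustrative helper name:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any stale line ending in "\t<name>" and appends a
// fresh mapping, so repeated runs leave exactly one entry.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "172.17.144.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}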
	I0610 12:11:16.733435    4588 mustload.go:65] Loading cluster: multinode-813300
	I0610 12:11:16.734138    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:11:16.734810    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:11:19.011757    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:19.012790    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:19.012790    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:11:19.013573    4588 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300 for IP: 172.17.151.128
	I0610 12:11:19.013573    4588 certs.go:194] generating shared ca certs ...
	I0610 12:11:19.013573    4588 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:11:19.013917    4588 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 12:11:19.014532    4588 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 12:11:19.014800    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 12:11:19.015170    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 12:11:19.015290    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 12:11:19.015688    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 12:11:19.016370    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 12:11:19.016618    4588 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 12:11:19.016812    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 12:11:19.017069    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 12:11:19.017245    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 12:11:19.017624    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 12:11:19.017944    4588 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 12:11:19.017944    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.018393    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.018580    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.018708    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 12:11:19.074850    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 12:11:19.123648    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 12:11:19.175920    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 12:11:19.221951    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 12:11:19.276690    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 12:11:19.328081    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 12:11:19.391788    4588 ssh_runner.go:195] Run: openssl version
	I0610 12:11:19.402568    4588 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 12:11:19.420480    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 12:11:19.454097    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.461999    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.461999    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.475323    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:11:19.489426    4588 command_runner.go:130] > b5213941
	I0610 12:11:19.501484    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 12:11:19.534058    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 12:11:19.566004    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.572892    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.573207    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.584393    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 12:11:19.594218    4588 command_runner.go:130] > 51391683
	I0610 12:11:19.608435    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 12:11:19.641477    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 12:11:19.673326    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.680330    4588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.680882    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.692878    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 12:11:19.704044    4588 command_runner.go:130] > 3ec20f2e
	I0610 12:11:19.714906    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
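Editor's note: each CA above is installed twice: the PEM is copied under /usr/share/ca-certificates, and a symlink named after its OpenSSL subject hash with a ".0" suffix is placed in /etc/ssl/certs (the c_rehash layout OpenSSL uses to locate trust anchors during verification). A sketch of the hash-and-link step, assuming openssl is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the openssl-hash/symlink steps above:
// /etc/ssl/certs/<subject-hash>.0 -> the installed PEM.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}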
	I0610 12:11:19.746683    4588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:11:19.753164    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:11:19.753835    4588 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:11:19.753979    4588 kubeadm.go:928] updating node {m02 172.17.151.128 8443 v1.30.1 docker false true} ...
	I0610 12:11:19.753979    4588 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.151.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 12:11:19.766808    4588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 12:11:19.786670    4588 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	I0610 12:11:19.786670    4588 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 12:11:19.799248    4588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 12:11:19.819418    4588 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0610 12:11:19.819418    4588 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0610 12:11:19.820008    4588 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0610 12:11:19.820008    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 12:11:19.820186    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 12:11:19.837476    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:11:19.838584    4588 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 12:11:19.841021    4588 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 12:11:19.860269    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 12:11:19.860269    4588 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 12:11:19.860899    4588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 12:11:19.860899    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 12:11:19.861094    4588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 12:11:19.861094    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 12:11:19.861150    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 12:11:19.875476    4588 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 12:11:19.927216    4588 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 12:11:19.928269    4588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 12:11:19.928622    4588 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
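Editor's note: the kubelet/kubeadm/kubectl binaries above are fetched from dl.k8s.io with a "checksum=file:" qualifier, i.e. the payload is verified against the published .sha256 sidecar before being installed. A self-contained Go sketch of that verify-then-install step, assuming the sidecar holds the hex digest; the helper names are illustrative.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func httpGet(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// fetchVerified downloads url and refuses to install it unless its
// SHA-256 matches the digest published alongside it at url+".sha256".
func fetchVerified(url, dst string) error {
	want, err := httpGet(url + ".sha256")
	if err != nil {
		return err
	}
	body, err := httpGet(url)
	if err != nil {
		return err
	}
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return os.WriteFile(dst, body, 0o755)
}

func main() {
	err := fetchVerified("https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl", "/tmp/kubectl")
	fmt.Println(err)
}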
	I0610 12:11:21.395244    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0610 12:11:21.414600    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0610 12:11:21.454103    4588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 12:11:21.515630    4588 ssh_runner.go:195] Run: grep 172.17.159.171	control-plane.minikube.internal$ /etc/hosts
	I0610 12:11:21.522801    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.159.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:11:21.563217    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:21.775475    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:11:21.807974    4588 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:11:21.808784    4588 start.go:316] joinCluster: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:11:21.808980    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 12:11:21.809040    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:11:24.214569    4588 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:11:24.214569    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:24.215479    4588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:11:26.984919    4588 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:11:26.984919    4588 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:11:26.985727    4588 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:11:27.193620    4588 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gf7439.c4abko5fnf4w17n8 --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 
	I0610 12:11:27.193620    4588 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3845966s)
	I0610 12:11:27.193620    4588 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:11:27.193620    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gf7439.c4abko5fnf4w17n8 --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-813300-m02"
	I0610 12:11:27.412803    4588 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:11:29.260064    4588 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 12:11:29.260064    4588 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0610 12:11:29.260064    4588 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0610 12:11:29.260064    4588 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:11:29.260064    4588 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.502015791s
	I0610 12:11:29.260185    4588 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0610 12:11:29.260185    4588 command_runner.go:130] > This node has joined the cluster:
	I0610 12:11:29.260185    4588 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0610 12:11:29.260185    4588 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0610 12:11:29.260185    4588 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0610 12:11:29.260185    4588 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gf7439.c4abko5fnf4w17n8 --discovery-token-ca-cert-hash sha256:08d7b79c676c5b99bca00683b8beb16b9b98e40bfd6ec47ca73824a2eb6738f2 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-813300-m02": (2.0665485s)
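Editor's note: the join flow above has two halves: "kubeadm token create --print-join-command --ttl=0" runs on the control plane and prints a ready-made join command, which the worker then executes with --ignore-preflight-errors=all, the cri-dockerd socket, and an explicit node name appended. A hedged Go sketch of stitching those flags onto the printed command:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// buildJoin appends the worker-specific flags shown in the log to the
// join command printed by `kubeadm token create`.
func buildJoin(printed, criSocket, nodeName string) string {
	return strings.TrimSpace(printed) +
		" --ignore-preflight-errors=all" +
		" --cri-socket " + criSocket +
		" --node-name=" + nodeName
}

func main() {
	// On the control plane (run locally here for illustration):
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(buildJoin(string(out), "unix:///var/run/cri-dockerd.sock", "multinode-813300-m02"))
}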
	I0610 12:11:29.260308    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 12:11:29.477872    4588 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0610 12:11:29.694891    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-813300-m02 minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=multinode-813300 minikube.k8s.io/primary=false
	I0610 12:11:29.850112    4588 command_runner.go:130] > node/multinode-813300-m02 labeled
	I0610 12:11:29.850212    4588 start.go:318] duration metric: took 8.0413623s to joinCluster
	I0610 12:11:29.850367    4588 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:11:29.851036    4588 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:11:29.855200    4588 out.go:177] * Verifying Kubernetes components...
	I0610 12:11:29.872060    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:11:30.101494    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:11:30.133140    4588 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:11:30.133905    4588 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.159.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:11:30.134653    4588 node_ready.go:35] waiting up to 6m0s for node "multinode-813300-m02" to be "Ready" ...
	I0610 12:11:30.135218    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:30.135218    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:30.135218    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:30.135218    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:30.154207    4588 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0610 12:11:30.154300    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:30 GMT
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Audit-Id: 120211c2-3f44-4da6-84af-a42103a0ca12
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:30.154300    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:30.154300    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:30.154300    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:30.154462    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
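Editor's note: node_ready.go above polls GET /api/v1/nodes/<name> roughly every 500ms for up to 6m, waiting for the NodeReady condition to turn True; the rounds that follow repeat the same request. A hedged client-go sketch of the same wait loop, with config loading simplified to the default kubeconfig:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the node reports Ready=True,
// mirroring the 500ms GET loop in the log.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "multinode-813300-m02", 6*time.Minute))
}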
	I0610 12:11:30.640539    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:30.640539    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:30.640539    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:30.640539    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:30.648978    4588 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:11:30.648978    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:30.648978    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:30.648978    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:30 GMT
	I0610 12:11:30.648978    4588 round_trippers.go:580]     Audit-Id: b18c775d-77ef-4caa-914c-7283fd55f1aa
	I0610 12:11:30.648978    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:31.145201    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:31.145282    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:31.145282    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:31.145282    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:31.152903    4588 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:11:31.152903    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:31.152903    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:31.152903    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:31 GMT
	I0610 12:11:31.152903    4588 round_trippers.go:580]     Audit-Id: 53a17888-1a8e-4851-8815-1bc758b4e0d1
	I0610 12:11:31.153005    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:31.153005    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:31.153005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:31.153005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:31.153133    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:31.642808    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:31.642895    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:31.642895    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:31.642895    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:31.646234    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:31.647170    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:31.647238    4588 round_trippers.go:580]     Audit-Id: 2c94ef73-ffa9-41c2-9f48-2d1eda7b40b0
	I0610 12:11:31.647238    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:31.647258    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:31.647258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:31.647258    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:31.647258    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:31.647258    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:31 GMT
	I0610 12:11:31.647389    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:32.146589    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:32.146654    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:32.146654    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:32.146654    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:32.151245    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:32.151473    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:32.151473    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:32.151473    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:32 GMT
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Audit-Id: 02a02b92-b406-46fa-a89f-f11d3aa78b57
	I0610 12:11:32.151473    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:32.151619    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:32.152091    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:32.647908    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:32.647908    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:32.647908    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:32.647908    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:32.655278    4588 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:11:32.656309    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:32.656309    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:32.656309    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:32.656309    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:32.656381    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:32.656381    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:32.656381    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:32 GMT
	I0610 12:11:32.656381    4588 round_trippers.go:580]     Audit-Id: 8be91a38-9480-4dc6-bb32-e813479247b1
	I0610 12:11:32.656509    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:33.136161    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:33.136161    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:33.136161    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:33.136370    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:33.140480    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:33.140480    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:33.140480    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:33 GMT
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Audit-Id: 829ec5bb-9a54-441f-9a33-3fac4f603fda
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:33.140595    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:33.140595    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:33.140677    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:33.649302    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:33.649302    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:33.649302    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:33.649302    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:33.653244    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:33.653244    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:33 GMT
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Audit-Id: f5522161-62a8-4be2-b191-8cee428580bd
	I0610 12:11:33.653244    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:33.653782    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:33.653782    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:33.653862    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:33.653862    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:34.140515    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:34.140774    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:34.140774    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:34.140774    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:34.144741    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:34.144836    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:34.144836    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:34.144917    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:34 GMT
	I0610 12:11:34.144998    4588 round_trippers.go:580]     Audit-Id: ffbf68f4-fcd8-46dd-aeb6-1bbbbe2cb644
	I0610 12:11:34.144998    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:34.144998    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:34.145028    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:34.145028    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:34.145028    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:34.641306    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:34.641355    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:34.641355    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:34.641395    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:34.648180    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:11:34.649068    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:34.649068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:34.649068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:34.649068    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:34.649068    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:34 GMT
	I0610 12:11:34.649158    4588 round_trippers.go:580]     Audit-Id: 3161a238-0ca8-4ad9-b851-e3ba727a1005
	I0610 12:11:34.649158    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:34.649158    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:34.649480    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:34.649960    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:35.141434    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:35.141434    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:35.141434    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:35.141544    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:35.144794    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:35.145459    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Audit-Id: 8bfb5db6-acd9-419a-a15c-52a9cae18cf4
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:35.145459    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:35.145459    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:35.145459    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:35 GMT
	I0610 12:11:35.145647    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:35.649334    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:35.649334    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:35.649334    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:35.649334    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:35.654625    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:11:35.654625    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:35.654625    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:35.654625    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:35.654625    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:35.654625    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:35.654719    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:35 GMT
	I0610 12:11:35.654719    4588 round_trippers.go:580]     Audit-Id: 36583692-c8d0-4e9c-9ce6-c1c822dd5fa2
	I0610 12:11:35.654719    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:35.654755    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:36.140102    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:36.140102    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:36.140102    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:36.140102    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:36.143717    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:36.143988    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:36.143988    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:36.143988    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:36.143988    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:36 GMT
	I0610 12:11:36.143988    4588 round_trippers.go:580]     Audit-Id: 677c1be2-6b1f-4364-9375-811a12bc2d54
	I0610 12:11:36.144073    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:36.144073    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:36.144073    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:36.144299    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:36.647892    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:36.647892    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:36.647960    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:36.647960    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:36.652449    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:36.652449    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:36.653266    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:36.653266    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:36.653266    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:36.653266    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:36 GMT
	I0610 12:11:36.653367    4588 round_trippers.go:580]     Audit-Id: 0775cb60-f275-466b-beb7-fbd374a788eb
	I0610 12:11:36.653367    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:36.653367    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:36.653528    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:36.654008    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:37.140931    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:37.140931    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:37.140931    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:37.140931    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:37.145903    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:37.145903    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:37.145992    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:37.145992    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:37.145992    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:37 GMT
	I0610 12:11:37.146069    4588 round_trippers.go:580]     Audit-Id: 0ae2f0c6-2a9b-45d0-a1d0-d6e366a1cda3
	I0610 12:11:37.146134    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:37.649232    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:37.649232    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:37.649232    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:37.649232    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:37.654247    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:11:37.654537    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:37.654537    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:37.654537    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:37 GMT
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Audit-Id: 6692a4c9-18ea-498b-9bac-d8956738e490
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:37.654537    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:37.654750    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:38.140018    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:38.140097    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:38.140097    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:38.140097    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:38.143731    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:38.144482    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:38.144482    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:38.144482    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:38.144482    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:38.144569    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:38.144569    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:38.144569    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:38 GMT
	I0610 12:11:38.144569    4588 round_trippers.go:580]     Audit-Id: 8d344def-2d40-4c03-9670-8ae9d6a107b8
	I0610 12:11:38.144569    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:38.645605    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:38.645605    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:38.645605    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:38.645605    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:38.650198    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:38.650198    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:38.650198    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:38.650198    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:38.650198    4588 round_trippers.go:580]     Content-Length: 4030
	I0610 12:11:38.650198    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:38 GMT
	I0610 12:11:38.650372    4588 round_trippers.go:580]     Audit-Id: 262d504f-c6bd-4fe3-8221-cde83d48b444
	I0610 12:11:38.650372    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:38.650372    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:38.650598    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"603","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0610 12:11:39.145556    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:39.145556    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:39.145556    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:39.145556    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:39.150540    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:39.151438    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:39.151438    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:39.151438    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:39 GMT
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Audit-Id: 97195732-aef3-4a63-8e27-d623b638c932
	I0610 12:11:39.151438    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:39.152316    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:39.152904    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:39.646188    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:39.646188    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:39.646188    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:39.646188    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:39.650273    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:39.650347    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:39.650347    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:39.650347    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:39 GMT
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Audit-Id: 63a802e5-f779-4df4-95b0-69698f33f890
	I0610 12:11:39.650347    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:39.650611    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:40.135464    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:40.135464    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:40.135464    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:40.135464    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:40.139465    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:40.139465    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:40 GMT
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Audit-Id: 3005cad6-5eb1-4e80-9df6-7f76602ade8f
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:40.140037    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:40.140037    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:40.140037    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:40.140181    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:40.647037    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:40.647242    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:40.647242    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:40.647242    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:40.652362    4588 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:11:40.652362    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:40.652362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:40.652362    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:40 GMT
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Audit-Id: b703d6a1-f080-4fd1-a944-38afee287a18
	I0610 12:11:40.652362    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:40.652965    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:41.137147    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:41.137147    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:41.137147    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:41.137147    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:41.141766    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:41.141766    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:41.141766    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:41.141766    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:41 GMT
	I0610 12:11:41.141766    4588 round_trippers.go:580]     Audit-Id: 31939970-7805-4a89-9e76-a7fad299f03e
	I0610 12:11:41.142164    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:41.142164    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:41.142164    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:41.142304    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:41.644436    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:41.644493    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:41.644493    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:41.644493    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:41.648780    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:41.648780    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:41.648780    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:41.648780    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:41.648780    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:41.648780    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:41 GMT
	I0610 12:11:41.648780    4588 round_trippers.go:580]     Audit-Id: 0825d248-901f-4c1d-810e-5285b2152eed
	I0610 12:11:41.649725    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:41.649994    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:41.650452    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:42.136785    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:42.136785    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:42.136785    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:42.136785    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:42.140392    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:42.140392    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:42.140392    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:42.140392    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:42 GMT
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Audit-Id: 11b80fc3-7764-4796-b629-31a53e9d8efe
	I0610 12:11:42.140392    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:42.141123    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:42.646819    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:42.646819    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:42.646819    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:42.646819    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:42.651676    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:42.651676    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Audit-Id: 05161fa0-65a0-4dfa-9fce-c6366744f573
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:42.651676    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:42.651676    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:42.651676    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:42 GMT
	I0610 12:11:42.652003    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:43.140233    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:43.140503    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:43.140503    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:43.140589    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:43.143984    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:43.143984    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:43.143984    4588 round_trippers.go:580]     Audit-Id: 048190cf-d8d4-4e7c-ad65-ba33997dd557
	I0610 12:11:43.144542    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:43.144542    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:43.144542    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:43.144542    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:43.144542    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:43 GMT
	I0610 12:11:43.144821    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:43.646980    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:43.646980    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:43.647093    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:43.647093    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:43.649867    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:43.650767    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:43.650767    4588 round_trippers.go:580]     Audit-Id: debea53e-3d89-46ce-9861-43438e7ef3fb
	I0610 12:11:43.650903    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:43.650903    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:43.650903    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:43.650903    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:43.650903    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:43 GMT
	I0610 12:11:43.650903    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:43.650903    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
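The repeating blocks above are minikube's node-readiness wait loop: roughly every 500 ms it GETs the node object from the API server, re-checks the Ready condition, and at this verbosity client-go logs the request URL, headers, and a truncated response body. Below is a minimal sketch of that polling pattern using client-go; the kubeconfig path, tick interval, and helper names are illustrative assumptions, not minikube's actual node_ready.go implementation.

```go
// Sketch: poll a node until its NodeReady condition is True,
// mirroring the ~500 ms GET cadence visible in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's NodeReady condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 500 ms, matching the timestamps in the log.
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for range ticker.C {
		node, err := client.CoreV1().Nodes().Get(
			context.TODO(), "multinode-813300-m02", metav1.GetOptions{})
		if err != nil {
			fmt.Println("get node:", err)
			continue
		}
		if nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		fmt.Println(`node has status "Ready":"False"`)
	}
}
```

In the log, each iteration of this loop corresponds to one GET / Response Headers / Response Body block, and the `node_ready.go:53` lines are the "still not Ready" branch firing.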
	I0610 12:11:44.141683    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:44.141759    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:44.141759    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:44.141759    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:44.146005    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:44.146005    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Audit-Id: fd0b413f-d703-4826-88f7-f92b964e7225
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:44.146005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:44.146005    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:44.146005    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:44 GMT
	I0610 12:11:44.146005    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:44.648434    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:44.648568    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:44.648568    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:44.648568    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:45.026766    4588 round_trippers.go:574] Response Status: 200 OK in 378 milliseconds
	I0610 12:11:45.026888    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Audit-Id: 2ffab90b-53ae-414a-a7af-dc244c1a0d38
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:45.026939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:45.026939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:45.026939    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:45 GMT
	I0610 12:11:45.026939    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:45.150155    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:45.150155    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:45.150155    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:45.150155    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:45.154085    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:45.154085    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:45.154085    4588 round_trippers.go:580]     Audit-Id: 96ef9dbe-5664-4716-9850-3761e6347748
	I0610 12:11:45.154150    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:45.154150    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:45.154150    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:45.154150    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:45.154150    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:45 GMT
	I0610 12:11:45.154663    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:45.640479    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:45.640479    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:45.640479    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:45.640479    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:45.644051    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:45.644886    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:45.644886    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:45.644886    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:45 GMT
	I0610 12:11:45.644990    4588 round_trippers.go:580]     Audit-Id: 59a50b84-480f-4407-866c-91f7a741c38f
	I0610 12:11:45.645063    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:45.645140    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:45.645229    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:45.645297    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:46.144014    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:46.144073    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:46.144073    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:46.144073    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:46.147638    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:46.147638    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:46.147638    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:46.147638    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:46.147638    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:46 GMT
	I0610 12:11:46.147638    4588 round_trippers.go:580]     Audit-Id: ae143dec-a170-46f3-8120-7d6e3e03234a
	I0610 12:11:46.148117    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:46.148117    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:46.148620    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:46.148620    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:46.640820    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:46.640989    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:46.640989    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:46.641063    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:46.645172    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:46.645213    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Audit-Id: 847d5b54-5db6-4652-9704-c8c39063334c
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:46.645213    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:46.645213    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:46.645213    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:46 GMT
	I0610 12:11:46.645213    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:47.141987    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:47.141987    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:47.141987    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:47.141987    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:47.145594    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:47.145594    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:47.145996    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:47.145996    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:47 GMT
	I0610 12:11:47.145996    4588 round_trippers.go:580]     Audit-Id: 51c3741a-3779-4687-9675-ec8b78395d73
	I0610 12:11:47.146242    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:47.639611    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:47.639688    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:47.639688    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:47.639688    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:47.643746    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:47.643746    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:47.643746    4588 round_trippers.go:580]     Audit-Id: e150216b-0242-4c48-ba26-ceed233c4e9e
	I0610 12:11:47.643746    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:47.643877    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:47.643877    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:47.643877    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:47.643877    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:47 GMT
	I0610 12:11:47.644149    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:48.138285    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:48.138501    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:48.138501    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:48.138501    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:48.142963    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:48.142963    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:48.143652    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:48.143652    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:48 GMT
	I0610 12:11:48.143652    4588 round_trippers.go:580]     Audit-Id: 1b862a76-f4a3-4be6-a4f2-bf278ed88005
	I0610 12:11:48.143747    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:48.650829    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:48.650909    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:48.650909    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:48.650909    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:48.660633    4588 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 12:11:48.660899    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Audit-Id: 17e5626d-5a6a-46d3-bc16-7e7057afeec3
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:48.660899    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:48.660899    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:48.660899    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:48 GMT
	I0610 12:11:48.661433    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:48.661959    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:49.136114    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:49.136114    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:49.136114    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:49.136114    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:49.140691    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:49.140691    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Audit-Id: 0c770e35-ded7-43e1-876e-cb07a38fd2ec
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:49.141710    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:49.141710    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:49.141710    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:49 GMT
	I0610 12:11:49.141900    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:49.649392    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:49.649667    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:49.649722    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:49.649722    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:49.656181    4588 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:11:49.656181    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:49.656181    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:49.656181    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:49 GMT
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Audit-Id: f82a420f-5dd7-47d8-950d-49e3d39c7c47
	I0610 12:11:49.656181    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:49.656719    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:50.150676    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:50.150676    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:50.150676    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:50.150676    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:50.155265    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:50.155265    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:50.155265    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:50.155265    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:50 GMT
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Audit-Id: fef3067f-7dbf-4d79-bc69-c0238a7f6f1e
	I0610 12:11:50.155265    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:50.155735    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:50.649159    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:50.649159    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:50.649159    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:50.649159    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:50.653519    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:50.653519    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:50.653519    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:50.653519    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:50 GMT
	I0610 12:11:50.653519    4588 round_trippers.go:580]     Audit-Id: 8688b0cf-3044-4665-8f85-fc7d50db907c
	I0610 12:11:50.653519    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:51.149572    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:51.149572    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.149572    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.149572    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.154215    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:51.154479    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.154479    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.154479    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Audit-Id: 212364d7-a337-45b2-9ccb-42587fa16fbd
	I0610 12:11:51.154479    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.154574    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"615","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0610 12:11:51.154574    4588 node_ready.go:53] node "multinode-813300-m02" has status "Ready":"False"
	I0610 12:11:51.636574    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:51.636574    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.636574    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.636574    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.648795    4588 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0610 12:11:51.648795    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.648795    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.648795    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.648795    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.648874    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.648874    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.648874    4588 round_trippers.go:580]     Audit-Id: 15cb6306-cb2e-42c9-90f9-f0ea78aa907e
	I0610 12:11:51.649046    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"640","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0610 12:11:51.649843    4588 node_ready.go:49] node "multinode-813300-m02" has status "Ready":"True"
	I0610 12:11:51.649913    4588 node_ready.go:38] duration metric: took 21.5150861s for node "multinode-813300-m02" to be "Ready" ...
	I0610 12:11:51.649913    4588 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
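	[editor's note] Once the node is Ready, the client lists every pod in kube-system (next request) and then waits on each system-critical pod individually. A sketch of the label-driven check implied by the log line above — the selector list is copied from that line, while the control flow is an illustration rather than minikube's exact pod_ready.go logic:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // Selectors copied from the "extra waiting" log line above.
	    var criticalSelectors = []string{
	        "k8s-app=kube-dns",
	        "component=etcd",
	        "component=kube-apiserver",
	        "component=kube-controller-manager",
	        "k8s-app=kube-proxy",
	        "component=kube-scheduler",
	    }

	    // podIsReady reports whether a pod's PodReady condition is True.
	    func podIsReady(pod corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)
	        for _, sel := range criticalSelectors {
	            pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
	                metav1.ListOptions{LabelSelector: sel})
	            if err != nil {
	                panic(err)
	            }
	            for _, pod := range pods.Items {
	                fmt.Printf("%s ready=%v\n", pod.Name, podIsReady(pod))
	            }
	        }
	    }
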
	I0610 12:11:51.649984    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods
	I0610 12:11:51.649984    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.649984    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.649984    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.658205    4588 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:11:51.658205    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.658205    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.658205    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Audit-Id: 4892c8a9-dc91-4772-83d2-aaf257434292
	I0610 12:11:51.658205    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.659421    4588 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"640"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70486 chars]
	I0610 12:11:51.663308    4588 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.663308    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:11:51.663308    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.663308    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.663308    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.666480    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:51.666717    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.666717    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.666717    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Audit-Id: 29e5482f-5681-47f7-833b-ea8a2eaca847
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.666717    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.666984    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"427","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0610 12:11:51.667673    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.667673    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.667673    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.667732    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.669455    4588 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 12:11:51.669455    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.669455    4588 round_trippers.go:580]     Audit-Id: bc194cc6-fd6f-420a-89b0-01f8d0a70bfd
	I0610 12:11:51.669455    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.670408    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.670408    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.670408    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.670408    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.670809    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.671358    4588 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.671358    4588 pod_ready.go:81] duration metric: took 8.0504ms for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
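	[editor's note] Note the pattern above: each per-pod probe is immediately followed by a GET of the node the pod is scheduled on (nodes/multinode-813300), which reads as a pod counting as Ready only when its hosting node is Ready too. A self-contained sketch of that paired check — the helper name and control flow are assumptions, not minikube's code:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // podAndNodeReady fetches a pod, then the node it runs on, and
	    // reports true only when both carry a True Ready condition.
	    func podAndNodeReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        podReady := false
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                podReady = true
	            }
	        }
	        if !podReady {
	            return false, nil
	        }
	        node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, c := range node.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                return c.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)
	        ok, err := podAndNodeReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-kbhvv")
	        fmt.Println(ok, err)
	    }
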
	I0610 12:11:51.671358    4588 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.671495    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-813300
	I0610 12:11:51.671592    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.671592    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.671657    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.673658    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.673658    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.673658    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.673658    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Audit-Id: 7b458228-14ae-4077-b82e-2cbe339be6a6
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.673658    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.674781    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"e48af956-8533-4b8e-be5d-0834484cbffa","resourceVersion":"385","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.159.171:2379","kubernetes.io/config.hash":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.mirror":"baa7bd9cfb361baaed8d7d5729a6c77c","kubernetes.io/config.seen":"2024-06-10T12:08:00.781973961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0610 12:11:51.674781    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.675319    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.675319    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.675319    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.678378    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.678579    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.678579    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Audit-Id: 67628109-d0cf-4546-acc6-77a9b7f24051
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.678579    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.678579    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.678984    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.678984    4588 pod_ready.go:92] pod "etcd-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.678984    4588 pod_ready.go:81] duration metric: took 7.6256ms for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.678984    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.678984    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-813300
	I0610 12:11:51.678984    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.679522    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.679522    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.681723    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.681723    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.682457    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Audit-Id: 006b6c27-a6c2-4581-9d6d-b3591452ff62
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.682457    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.682457    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.682703    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-813300","namespace":"kube-system","uid":"f824b391-b3d2-49ec-ba7d-863cb2150f81","resourceVersion":"386","creationTimestamp":"2024-06-10T12:07:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.159.171:8443","kubernetes.io/config.hash":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.mirror":"93f80d01e953cc664fc05c397fdad000","kubernetes.io/config.seen":"2024-06-10T12:07:52.425003820Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0610 12:11:51.682824    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.682824    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.682824    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.682824    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.686165    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.686165    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Audit-Id: 1a7c9c37-ae20-4df4-9b97-f0c2a3dbc6bd
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.686272    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.686272    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.686272    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.686558    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.687382    4588 pod_ready.go:92] pod "kube-apiserver-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.687439    4588 pod_ready.go:81] duration metric: took 8.4554ms for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.687516    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.687601    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-813300
	I0610 12:11:51.687601    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.687601    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.687601    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.690594    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.691080    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Audit-Id: 99614bca-e7d3-4d5a-bcd7-a928cb9b154e
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.691080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.691080    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.691080    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.691464    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-813300","namespace":"kube-system","uid":"879be9d7-8b2b-4f58-ba70-61d4e9f3441e","resourceVersion":"384","creationTimestamp":"2024-06-10T12:08:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.mirror":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.seen":"2024-06-10T12:08:00.781970961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0610 12:11:51.692144    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:51.692144    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.692144    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.692144    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.694634    4588 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:11:51.694634    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.694634    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.694634    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.694984    4588 round_trippers.go:580]     Audit-Id: 32d4392b-f53e-46ab-be25-56be6d4cbf25
	I0610 12:11:51.694984    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.695078    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.695101    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.695358    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:51.695860    4588 pod_ready.go:92] pod "kube-controller-manager-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:51.695917    4588 pod_ready.go:81] duration metric: took 8.4006ms for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.695964    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:51.839454    4588 request.go:629] Waited for 143.1953ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:11:51.839923    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:11:51.839923    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:51.839923    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:51.839923    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:51.843515    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:51.843814    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:51.843814    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:51.843884    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:51.843884    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:51.843921    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:51 GMT
	I0610 12:11:51.843921    4588 round_trippers.go:580]     Audit-Id: ae52edfd-adbd-41e2-9903-60b4ca215d9e
	I0610 12:11:51.843921    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:51.843921    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrpvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1","resourceVersion":"380","creationTimestamp":"2024-06-10T12:08:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0610 12:11:52.037284    4588 request.go:629] Waited for 192.0358ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.037410    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.037470    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.037470    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.037470    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.041986    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:52.041986    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.041986    4588 round_trippers.go:580]     Audit-Id: 6f58beea-d4d9-4031-a26a-f0800096bfaa
	I0610 12:11:52.043065    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.043065    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.043065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.043065    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.043065    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.043433    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:52.044120    4588 pod_ready.go:92] pod "kube-proxy-nrpvt" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:52.044181    4588 pod_ready.go:81] duration metric: took 348.2135ms for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
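	[editor's note] The request.go:629 "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local token-bucket rate limiter pacing this burst of per-pod GETs; they are unrelated to the server-side APF headers seen above. The limiter is tunable on rest.Config, as sketched below — the default QPS/Burst values (5 and 10 in the client-go versions I know of) are stated from memory and may differ by version:

	    package main

	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        // Raise the client-side token bucket so bursts of GETs (one per
	        // pod, as in the log above) are not queued locally.
	        cfg.QPS = 50
	        cfg.Burst = 100
	        _ = kubernetes.NewForConfigOrDie(cfg)
	    }
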
	I0610 12:11:52.044181    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.249108    4588 request.go:629] Waited for 204.4773ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:11:52.249396    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:11:52.249396    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.249396    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.249396    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.253114    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:52.254189    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Audit-Id: 22ba6e39-243b-40db-98c8-3e627dba7115
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.254189    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.254189    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.254189    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.254310    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rx2b2","generateName":"kube-proxy-","namespace":"kube-system","uid":"ce59a99b-a561-4598-9399-147f748433a2","resourceVersion":"622","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0610 12:11:52.451902    4588 request.go:629] Waited for 196.8687ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:52.452172    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:11:52.452172    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.452227    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.452227    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.456977    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:52.456977    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.456977    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.457882    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.457882    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.457882    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.457882    4588 round_trippers.go:580]     Audit-Id: 952f9251-dd4e-4d64-989c-68606172a0ae
	I0610 12:11:52.457882    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.458487    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"640","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0610 12:11:52.458526    4588 pod_ready.go:92] pod "kube-proxy-rx2b2" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:52.458526    4588 pod_ready.go:81] duration metric: took 414.2651ms for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.458526    4588 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.638866    4588 request.go:629] Waited for 180.175ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:11:52.639129    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:11:52.639129    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.639129    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.639129    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.642844    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:52.642844    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Audit-Id: 812d93e6-be52-4acc-b0ac-ecbab159315b
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.642844    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.642844    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.642844    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.643940    4588 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-813300","namespace":"kube-system","uid":"bd85735c-2f0d-48ab-bb0e-83f471c3af0a","resourceVersion":"387","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.mirror":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.seen":"2024-06-10T12:08:00.781972261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0610 12:11:52.842848    4588 request.go:629] Waited for 197.3782ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.843029    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes/multinode-813300
	I0610 12:11:52.843029    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:52.843029    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:52.843029    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:52.846380    4588 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:11:52.846380    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:52.847068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:52 GMT
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Audit-Id: 4d4f8b3e-cb53-4801-94ee-6aeaebe31fb6
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:52.847068    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:52.847068    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:52.847544    4588 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0610 12:11:52.848051    4588 pod_ready.go:92] pod "kube-scheduler-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:52.848114    4588 pod_ready.go:81] duration metric: took 389.5849ms for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:52.848114    4588 pod_ready.go:38] duration metric: took 1.1981912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:11:52.848184    4588 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:11:52.860356    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:11:52.887428    4588 system_svc.go:56] duration metric: took 38.3195ms WaitForService to wait for kubelet
	I0610 12:11:52.887428    4588 kubeadm.go:576] duration metric: took 23.0368067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:11:52.887492    4588 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:11:53.045346    4588 request.go:629] Waited for 157.5222ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.159.171:8443/api/v1/nodes
	I0610 12:11:53.045433    4588 round_trippers.go:463] GET https://172.17.159.171:8443/api/v1/nodes
	I0610 12:11:53.045433    4588 round_trippers.go:469] Request Headers:
	I0610 12:11:53.045527    4588 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:11:53.045527    4588 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:11:53.049939    4588 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:11:53.049939    4588 round_trippers.go:577] Response Headers:
	I0610 12:11:53.049939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:11:53.049939    4588 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:11:53 GMT
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Audit-Id: f303c0c3-82b7-4c72-b12a-228fca786f50
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:11:53.049939    4588 round_trippers.go:580]     Content-Type: application/json
	I0610 12:11:53.051319    4588 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"642"},"items":[{"metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"415","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9269 chars]
	I0610 12:11:53.051858    4588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:11:53.052041    4588 node_conditions.go:123] node cpu capacity is 2
	I0610 12:11:53.052041    4588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:11:53.052041    4588 node_conditions.go:123] node cpu capacity is 2
	I0610 12:11:53.052041    4588 node_conditions.go:105] duration metric: took 164.5477ms to run NodePressure ...
	I0610 12:11:53.052127    4588 start.go:240] waiting for startup goroutines ...
	I0610 12:11:53.052168    4588 start.go:254] writing updated cluster config ...
	I0610 12:11:53.067074    4588 ssh_runner.go:195] Run: rm -f paused
	I0610 12:11:53.212519    4588 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 12:11:53.217393    4588 out.go:177] * Done! kubectl is now configured to use "multinode-813300" cluster and "default" namespace by default
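
Note on the repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above: they come from client-go's default client-side rate limiter (QPS 5, burst 10 unless overridden), not from API Priority and Fairness on the server. A minimal Go sketch of raising those limits, assuming a standard kubeconfig; the QPS/Burst values below are illustrative, not minikube's settings:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; using the default location is an assumption here.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults are QPS=5, Burst=10; the ~150-400ms waits logged
	// above are the rate limiter holding back bursts of GETs.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
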
	
	
	==> Docker <==
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.123513267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235169134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235268934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235298134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.235560636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:08:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 12:08:31 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:08:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a0bc6043f7b92f091f4ceee7db3e11617072391c6e5303f4ecdafdb06d4b585a/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.730390719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.730618620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.730710821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.732556631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.765650908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.765730109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.765799609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:08:31 multinode-813300 dockerd[1330]: time="2024-06-10T12:08:31.766004410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.303731826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.304019627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.304037527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:20 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:20.304223128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:20 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:12:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 12:12:21 multinode-813300 cri-dockerd[1231]: time="2024-06-10T12:12:21Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.074732018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.076936421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.077116521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:12:22 multinode-813300 dockerd[1330]: time="2024-06-10T12:12:22.077673422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	91782a06524c6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   6 minutes ago       Running             busybox                   0                   9ffef928b2474       busybox-fc5497c4f-z28tq
	f2e39052db195       cbb01a7bd410d                                                                                         10 minutes ago      Running             coredns                   0                   a1ae7aed00678       coredns-7db6d8ff4d-kbhvv
	d32ce22e31b06       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       0                   a0bc6043f7b92       storage-provisioner
	c39d54960e7d7       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              10 minutes ago      Running             kindnet-cni               0                   689b8976cc029       kindnet-29gbv
	afad8b05897e5       747097150317f                                                                                         10 minutes ago      Running             kube-proxy                0                   62db1c721951a       kube-proxy-nrpvt
	bd1a6cd987430       a52dc94f0a912                                                                                         11 minutes ago      Running             kube-scheduler            0                   e3b6aa9a0e1d1       kube-scheduler-multinode-813300
	f1409bf44ff14       25a1387cdab82                                                                                         11 minutes ago      Running             kube-controller-manager   0                   f04d7b3d4fcc6       kube-controller-manager-multinode-813300
	34b9299d74e38       3861cfcd7c04c                                                                                         11 minutes ago      Running             etcd                      0                   a10e49596de5e       etcd-multinode-813300
	ba52603f83875       91be940803172                                                                                         11 minutes ago      Running             kube-apiserver            0                   c7d28a97ba1c4       kube-apiserver-multinode-813300
	
	
	==> coredns [f2e39052db19] <==
	[INFO] 10.244.1.2:46174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001048s
	[INFO] 10.244.0.3:52212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003513s
	[INFO] 10.244.0.3:44369 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095801s
	[INFO] 10.244.0.3:38578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001615s
	[INFO] 10.244.0.3:38593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002977s
	[INFO] 10.244.0.3:38526 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137201s
	[INFO] 10.244.0.3:48445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001467s
	[INFO] 10.244.0.3:47462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000731s
	[INFO] 10.244.0.3:58225 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196101s
	[INFO] 10.244.1.2:35924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001833s
	[INFO] 10.244.1.2:51712 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001386s
	[INFO] 10.244.1.2:37161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007s
	[INFO] 10.244.1.2:37141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141s
	[INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001227s
	[INFO] 10.244.0.3:56133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247001s
	[INFO] 10.244.0.3:48451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000604s
	[INFO] 10.244.0.3:38368 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001264s
	[INFO] 10.244.1.2:44129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001056s
	[INFO] 10.244.1.2:34710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001955s
	[INFO] 10.244.1.2:59467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001589s
	[INFO] 10.244.1.2:53581 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002131s
	[INFO] 10.244.0.3:41745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	[INFO] 10.244.0.3:53512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001784s
	[INFO] 10.244.0.3:56441 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001208s
	[INFO] 10.244.0.3:55640 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001199s
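
The NXDOMAIN/NOERROR pairs above are the ordinary resolv.conf search-path walk: with the "search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5" file cri-dockerd wrote (see the Docker section), a name with fewer than five dots such as kubernetes.default is tried against each search suffix before the literal name, which is why kubernetes.default.default.svc.cluster.local returns NXDOMAIN before kubernetes.default.svc.cluster.local returns NOERROR. A minimal sketch of that candidate ordering (the helper name is illustrative):

package main

import (
	"fmt"
	"strings"
)

// candidates mimics the resolver's ndots rule: if the query has fewer
// dots than ndots, try each search suffix first, the literal name last.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name)
}

func main() {
	search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range candidates("kubernetes.default", search, 5) {
		fmt.Println(q) // matches the query order seen in the coredns log above
	}
}
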
	
	
	==> describe nodes <==
	Name:               multinode-813300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-813300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-813300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:07:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-813300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:19:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:17:42 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:17:42 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:17:42 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:17:42 +0000   Mon, 10 Jun 2024 12:08:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.159.171
	  Hostname:    multinode-813300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 04dc333273774adc9b2cebbeee4c799a
	  System UUID:                5734c1ff-f59b-f647-9c36-fb6d9a8cd541
	  Boot ID:                    c2d6ffa5-8803-4682-946d-e778abe2b7af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-z28tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 coredns-7db6d8ff4d-kbhvv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-29gbv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-nrpvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m   kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m   node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	  Normal  NodeReady                10m   kubelet          Node multinode-813300 status is now: NodeReady
	
	
	Name:               multinode-813300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-813300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-813300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:11:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-813300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:18:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:17:36 +0000   Mon, 10 Jun 2024 12:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:17:36 +0000   Mon, 10 Jun 2024 12:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:17:36 +0000   Mon, 10 Jun 2024 12:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:17:36 +0000   Mon, 10 Jun 2024 12:11:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.151.128
	  Hostname:    multinode-813300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d46b791e8a04ff7a071c88405a5a4eb
	  System UUID:                e053fc34-e8e5-6649-afc7-f62c0d458753
	  Boot ID:                    a3528c50-da8b-4321-8198-65ea5eca732a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-czxmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kindnet-r4nfq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m40s
	  kube-system                 kube-proxy-rx2b2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m40s (x2 over 7m40s)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s (x2 over 7m40s)  kubelet          Node multinode-813300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s (x2 over 7m40s)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m39s                  node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	  Normal  NodeReady                7m17s                  kubelet          Node multinode-813300-m02 status is now: NodeReady
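
The node_conditions.go lines near the top of this log ("verifying NodePressure condition", then per-node cpu and ephemeral-storage capacity) correspond to a check like the one sketched below, which mirrors the Conditions and Capacity tables above. This is a minimal sketch under the same kubeconfig assumption as earlier, not minikube's actual code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Fail loudly if any pressure condition is True; the tables above
		// show MemoryPressure/DiskPressure/PIDPressure all False.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("node %s under pressure: %s\n", n.Name, c.Type)
				}
			}
		}
		// These match the "node cpu capacity is 2" / "ephemeral capacity is
		// 17734596Ki" log lines.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
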
	
	
	==> dmesg <==
	[  +7.208733] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun10 12:06] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.196226] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[Jun10 12:07] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.123164] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.597831] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.216475] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.252946] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +2.841084] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.239357] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.201793] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.312951] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[ +11.774213] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.120592] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.210672] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +6.442980] systemd-fstab-generator[1714]: Ignoring "noauto" option for root device
	[  +0.108322] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.582828] systemd-fstab-generator[2127]: Ignoring "noauto" option for root device
	[Jun10 12:08] kauditd_printk_skb: 62 callbacks suppressed
	[ +15.292472] systemd-fstab-generator[2331]: Ignoring "noauto" option for root device
	[  +0.227353] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.603365] kauditd_printk_skb: 51 callbacks suppressed
	[Jun10 12:12] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [34b9299d74e3] <==
	{"level":"info","ts":"2024-06-10T12:07:55.14921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became leader at term 2"}
	{"level":"info","ts":"2024-06-10T12:07:55.149221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8f4442f54c46fb8d elected leader 8f4442f54c46fb8d at term 2"}
	{"level":"info","ts":"2024-06-10T12:07:55.156121Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8f4442f54c46fb8d","local-member-attributes":"{Name:multinode-813300 ClientURLs:[https://172.17.159.171:2379]}","request-path":"/0/members/8f4442f54c46fb8d/attributes","cluster-id":"ede117c4f607edf2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T12:07:55.159001Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.159829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T12:07:55.160871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T12:07:55.163364Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T12:07:55.165819Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T12:07:55.166021Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.166252Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.166441Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:07:55.168652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.159.171:2379"}
	{"level":"info","ts":"2024-06-10T12:07:55.184009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T12:07:57.986982Z","caller":"traceutil/trace.go:171","msg":"trace[314319298] transaction","detail":"{read_only:false; response_revision:57; number_of_response:1; }","duration":"175.967496ms","start":"2024-06-10T12:07:57.811Z","end":"2024-06-10T12:07:57.986968Z","steps":["trace[314319298] 'process raft request'  (duration: 175.915395ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:07:57.985692Z","caller":"traceutil/trace.go:171","msg":"trace[688595595] transaction","detail":"{read_only:false; response_revision:56; number_of_response:1; }","duration":"176.678005ms","start":"2024-06-10T12:07:57.808997Z","end":"2024-06-10T12:07:57.985675Z","steps":["trace[688595595] 'process raft request'  (duration: 167.851999ms)"],"step_count":1}
	2024/06/10 12:08:00 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-10T12:11:45.034472Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.434792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-813300-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-06-10T12:11:45.034652Z","caller":"traceutil/trace.go:171","msg":"trace[1392918931] range","detail":"{range_begin:/registry/minions/multinode-813300-m02; range_end:; response_count:1; response_revision:627; }","duration":"372.686393ms","start":"2024-06-10T12:11:44.66195Z","end":"2024-06-10T12:11:45.034637Z","steps":["trace[1392918931] 'range keys from in-memory index tree'  (duration: 372.300191ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:11:45.034806Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T12:11:44.661936Z","time spent":"372.859294ms","remote":"127.0.0.1:55574","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3173,"request content":"key:\"/registry/minions/multinode-813300-m02\" "}
	{"level":"warn","ts":"2024-06-10T12:11:45.03612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.337283ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18126302413705664155 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-813300\" mod_revision:611 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-813300\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-813300\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-10T12:11:45.038666Z","caller":"traceutil/trace.go:171","msg":"trace[807238633] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"254.838757ms","start":"2024-06-10T12:11:44.783815Z","end":"2024-06-10T12:11:45.038654Z","steps":["trace[807238633] 'process raft request'  (duration: 57.529761ms)","trace[807238633] 'compare'  (duration: 193.138277ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T12:13:49.072922Z","caller":"traceutil/trace.go:171","msg":"trace[78076722] transaction","detail":"{read_only:false; response_revision:782; number_of_response:1; }","duration":"148.070995ms","start":"2024-06-10T12:13:48.924834Z","end":"2024-06-10T12:13:49.072905Z","steps":["trace[78076722] 'process raft request'  (duration: 147.862294ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:17:55.333657Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":734}
	{"level":"info","ts":"2024-06-10T12:17:55.355402Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":734,"took":"20.279864ms","hash":1333618607,"current-db-size-bytes":2359296,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2359296,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-10T12:17:55.358022Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1333618607,"revision":734,"compact-revision":-1}
	
	
	==> kernel <==
	 12:19:08 up 13 min,  0 users,  load average: 0.28, 0.25, 0.17
	Linux multinode-813300 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c39d54960e7d] <==
	I0610 12:18:06.237354       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:18:16.244574       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:18:16.244799       1 main.go:227] handling current node
	I0610 12:18:16.244837       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:18:16.244863       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:18:26.258608       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:18:26.258654       1 main.go:227] handling current node
	I0610 12:18:26.258669       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:18:26.258676       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:18:36.264620       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:18:36.264824       1 main.go:227] handling current node
	I0610 12:18:36.264841       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:18:36.264850       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:18:46.275317       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:18:46.275426       1 main.go:227] handling current node
	I0610 12:18:46.275460       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:18:46.275469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:18:56.290965       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:18:56.291027       1 main.go:227] handling current node
	I0610 12:18:56.291041       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:18:56.291048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:19:06.298370       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:19:06.298512       1 main.go:227] handling current node
	I0610 12:19:06.298529       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:19:06.298537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
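
kindnet's reconcile loop above re-lists the nodes every ten seconds and, for each peer, ensures a route to that node's PodCIDR via its InternalIP. A minimal Linux-only sketch of that route programming, assuming the github.com/vishvananda/netlink package; this is not kindnet's actual code, and the addresses are the ones the log reports:

package main

import (
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// From the log: peer node multinode-813300-m02 owns 10.244.1.0/24 and
	// is reachable at 172.17.151.128.
	_, podCIDR, err := net.ParseCIDR("10.244.1.0/24")
	if err != nil {
		panic(err)
	}
	route := &netlink.Route{
		Dst: podCIDR,
		Gw:  net.ParseIP("172.17.151.128"),
	}
	// RouteReplace is idempotent, which suits a periodic reconcile loop.
	if err := netlink.RouteReplace(route); err != nil {
		panic(err)
	}
}
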
	
	
	==> kube-apiserver [ba52603f8387] <==
	I0610 12:07:59.824973       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0610 12:07:59.841370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.159.171]
	I0610 12:07:59.843233       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 12:07:59.851566       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 12:08:00.422415       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0610 12:08:00.612432       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0610 12:08:00.612551       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0610 12:08:00.612582       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.8µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0610 12:08:00.613710       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0610 12:08:00.614096       1 timeout.go:142] post-timeout activity - time-elapsed: 1.826019ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0610 12:08:00.723908       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 12:08:00.768391       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0610 12:08:00.811944       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 12:08:14.681862       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0610 12:08:15.551635       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0610 12:12:25.854015       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62544: use of closed network connection
	E0610 12:12:26.395729       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62547: use of closed network connection
	E0610 12:12:27.123198       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62549: use of closed network connection
	E0610 12:12:27.655576       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62551: use of closed network connection
	E0610 12:12:28.202693       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62554: use of closed network connection
	E0610 12:12:28.742674       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62556: use of closed network connection
	E0610 12:12:29.738951       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62559: use of closed network connection
	E0610 12:12:40.298395       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62561: use of closed network connection
	E0610 12:12:40.800091       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62563: use of closed network connection
	E0610 12:12:51.330500       1 conn.go:339] Error on socket receive: read tcp 172.17.159.171:8443->172.17.144.1:62566: use of closed network connection
	
	
	==> kube-controller-manager [f1409bf44ff1] <==
	I0610 12:08:16.024148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.478301ms"
	I0610 12:08:16.151441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.784808ms"
	I0610 12:08:16.151859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="288.402µs"
	I0610 12:08:16.577624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.03545ms"
	I0610 12:08:16.593339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.556101ms"
	I0610 12:08:16.593508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.3µs"
	I0610 12:08:30.535681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130µs"
	I0610 12:08:30.566310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.4µs"
	I0610 12:08:32.538906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="180.301µs"
	I0610 12:08:32.610537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.137489ms"
	I0610 12:08:32.611020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5µs"
	I0610 12:08:34.635560       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:11:28.859639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:11:28.879298       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m02" podCIDRs=["10.244.1.0/24"]
	I0610 12:11:29.670639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:11:51.574110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:12:19.785464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.490556ms"
	I0610 12:12:19.804051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.524284ms"
	I0610 12:12:19.806222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0610 12:12:19.813010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.401µs"
	I0610 12:12:19.818841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.9µs"
	I0610 12:12:22.803157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.023114ms"
	I0610 12:12:22.803959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.7µs"
	I0610 12:12:23.117968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.704624ms"
	I0610 12:12:23.118507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.5µs"
	
	
	==> kube-proxy [afad8b05897e] <==
	I0610 12:08:17.787330       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:08:17.815813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.159.171"]
	I0610 12:08:17.929231       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:08:17.929304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:08:17.929325       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:08:17.933115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:08:17.933534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:08:17.933681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:08:17.935227       1 config.go:192] "Starting service config controller"
	I0610 12:08:17.935260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:08:17.935291       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:08:17.935297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:08:17.937731       1 config.go:319] "Starting node config controller"
	I0610 12:08:17.938095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:08:18.035433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:08:18.035502       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:08:18.038590       1 shared_informer.go:320] Caches are synced for node config
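
The "Setting route_localnet=1" line above is kube-proxy enabling routing of loopback-sourced traffic so NodePorts answer on 127.0.0.1; the message itself names the two supported opt-outs (iptables.localhostNodePorts / --iptables-localhost-nodeports, or --nodeport-addresses). A minimal sketch that reads back the sysctl kube-proxy set, using the standard procfs path:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// kube-proxy writes net.ipv4.conf.all.route_localnet=1 when localhost
	// NodePorts are enabled; this just reads the value back.
	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
	if err != nil {
		panic(err)
	}
	fmt.Println("route_localnet =", strings.TrimSpace(string(b)))
}
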
	
	
	==> kube-scheduler [bd1a6cd98743] <==
	W0610 12:07:58.426795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.427119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.503514       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 12:07:58.503568       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 12:07:58.610877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 12:07:58.611650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 12:07:58.611603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 12:07:58.612141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 12:07:58.614694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 12:07:58.614992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 12:07:58.752570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.752635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.810605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 12:07:58.810721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 12:07:58.815170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 12:07:58.815852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 12:07:58.816493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 12:07:58.816687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 12:07:58.834947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.836145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.838693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 12:07:58.838938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 12:07:58.897162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 12:07:58.897200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:08:01.565495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 12:15:00 multinode-813300 kubelet[2134]: E0610 12:15:00.915435    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:15:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:15:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:15:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:15:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:16:00 multinode-813300 kubelet[2134]: E0610 12:16:00.916678    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:16:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:16:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:16:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:16:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:17:00 multinode-813300 kubelet[2134]: E0610 12:17:00.916733    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:17:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:17:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:17:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:17:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:18:00 multinode-813300 kubelet[2134]: E0610 12:18:00.915818    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:18:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:18:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:18:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:18:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:19:00 multinode-813300 kubelet[2134]: E0610 12:19:00.916413    2134 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:19:00 multinode-813300 kubelet[2134]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:19:00 multinode-813300 kubelet[2134]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:19:00 multinode-813300 kubelet[2134]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:19:00 multinode-813300 kubelet[2134]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:18:59.913712    7280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
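Note on the kubelet entries in the log above: the repeated "Could not set up iptables canary" errors occur because the guest kernel exposes no ip6tables nat table, so kubelet cannot create its KUBE-KUBELET-CANARY chain for IPv6; the IPv4 side is unaffected. A minimal manual check, not part of the test run, assuming shell access to the node via "minikube ssh -p multinode-813300":

	# Reproduces the "Table does not exist" error when the nat table is unavailable for IPv6.
	sudo ip6tables -t nat -L
	# Checks whether the ip6table_nat kernel module is loaded; modprobe fails
	# on guest kernels built without the module, matching the kubelet error.
	lsmod | grep ip6table_nat
	sudo modprobe ip6table_nat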
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-813300 -n multinode-813300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-813300 -n multinode-813300: (13.2800046s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-813300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (76.34s)
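Note on the recurring stderr warning: "Unable to resolve the current Docker CLI context \"default\"" appears in every minikube invocation in this run because the Docker CLI's context metadata file (meta.json under .docker\contexts\meta) is missing on the Jenkins host; it is host-environment noise rather than a cluster-side failure. A possible way to inspect and reset the context store on the host, not part of the test run (standard docker CLI subcommands):

	# List known CLI contexts; "default" should resolve even without on-disk metadata.
	docker context ls
	# Re-select the default context, rewriting the CLI config entry that points at it.
	docker context use default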

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (521.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-813300
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-813300
E0610 12:28:17.598996    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-813300: (1m43.677827s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-813300 --wait=true -v=8 --alsologtostderr
E0610 12:29:41.888182    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 12:33:17.601370    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 12:34:41.881416    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-813300 --wait=true -v=8 --alsologtostderr: exit status 1 (6m5.2644961s)

                                                
                                                
-- stdout --
	* [multinode-813300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-813300" primary control-plane node in "multinode-813300" cluster
	* Restarting existing hyperv VM for "multinode-813300" ...
	* Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-813300-m02" worker node in "multinode-813300" cluster
	* Restarting existing hyperv VM for "multinode-813300-m02" ...
	* Found network options:
	  - NO_PROXY=172.17.150.144
	  - NO_PROXY=172.17.150.144
	* Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	  - env NO_PROXY=172.17.150.144

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:28:38.574498    8536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0610 12:28:38.654839    8536 out.go:291] Setting OutFile to fd 604 ...
	I0610 12:28:38.654983    8536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:28:38.654983    8536 out.go:304] Setting ErrFile to fd 880...
	I0610 12:28:38.654983    8536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:28:38.677325    8536 out.go:298] Setting JSON to false
	I0610 12:28:38.680796    8536 start.go:129] hostinfo: {"hostname":"minikube6","uptime":22407,"bootTime":1718000111,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 12:28:38.680796    8536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 12:28:38.877736    8536 out.go:177] * [multinode-813300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 12:28:38.892532    8536 notify.go:220] Checking for updates...
	I0610 12:28:38.906740    8536 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:28:38.929681    8536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 12:28:38.940798    8536 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 12:28:39.019798    8536 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 12:28:39.117032    8536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 12:28:39.164958    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:28:39.165743    8536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 12:28:45.223549    8536 out.go:177] * Using the hyperv driver based on existing profile
	I0610 12:28:45.237414    8536 start.go:297] selected driver: hyperv
	I0610 12:28:45.237414    8536 start.go:901] validating driver "hyperv" against &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.144.46 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:28:45.238193    8536 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 12:28:45.295122    8536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:28:45.295122    8536 cni.go:84] Creating CNI manager for ""
	I0610 12:28:45.295122    8536 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 12:28:45.295122    8536 start.go:340] cluster config:
	{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.144.46 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:28:45.296067    8536 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:28:45.377354    8536 out.go:177] * Starting "multinode-813300" primary control-plane node in "multinode-813300" cluster
	I0610 12:28:45.415578    8536 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:28:45.416310    8536 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 12:28:45.416389    8536 cache.go:56] Caching tarball of preloaded images
	I0610 12:28:45.416765    8536 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:28:45.417002    8536 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 12:28:45.417351    8536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:28:45.420305    8536 start.go:360] acquireMachinesLock for multinode-813300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:28:45.420305    8536 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300"
	I0610 12:28:45.420305    8536 start.go:96] Skipping create...Using existing machine configuration
	I0610 12:28:45.420831    8536 fix.go:54] fixHost starting: 
	I0610 12:28:45.421427    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:28:48.413842    8536 main.go:141] libmachine: [stdout =====>] : Off
	
	I0610 12:28:48.413842    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:28:48.413933    8536 fix.go:112] recreateIfNeeded on multinode-813300: state=Stopped err=<nil>
	W0610 12:28:48.413933    8536 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 12:28:48.416868    8536 out.go:177] * Restarting existing hyperv VM for "multinode-813300" ...
	I0610 12:28:48.420782    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300
	I0610 12:28:51.713723    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:28:51.714356    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:28:51.714356    8536 main.go:141] libmachine: Waiting for host to start...
	I0610 12:28:51.714356    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:28:54.118878    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:28:54.119411    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:28:54.119503    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:28:56.814045    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:28:56.814045    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:28:57.822171    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:00.211852    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:00.211852    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:00.212476    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:02.926524    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:29:02.926524    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:03.937598    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:06.275325    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:06.275325    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:06.275325    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:09.010990    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:29:09.010990    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:10.016228    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:12.410508    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:12.410508    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:12.411443    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:15.181346    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:29:15.181346    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:16.183093    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:18.525084    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:18.525150    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:18.525150    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:21.208775    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:21.208775    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:21.211590    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:23.514717    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:23.514717    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:23.515049    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:26.239801    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:26.240812    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:26.241182    8536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:29:26.244303    8536 machine.go:94] provisionDockerMachine start ...
	I0610 12:29:26.244413    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:28.530608    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:28.530812    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:28.530812    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:31.282690    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:31.284009    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:31.289874    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:29:31.290002    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:29:31.290002    8536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:29:31.435447    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:29:31.435447    8536 buildroot.go:166] provisioning hostname "multinode-813300"
	I0610 12:29:31.435447    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:33.722919    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:33.722970    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:33.722970    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:36.471690    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:36.472334    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:36.479090    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:29:36.479791    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:29:36.479791    8536 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300 && echo "multinode-813300" | sudo tee /etc/hostname
	I0610 12:29:36.652382    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300
	
	I0610 12:29:36.652514    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:38.983413    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:38.983600    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:38.983600    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:41.749950    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:41.750776    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:41.756940    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:29:41.757629    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:29:41.757629    8536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:29:41.917797    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:29:41.917797    8536 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:29:41.917797    8536 buildroot.go:174] setting up certificates
	I0610 12:29:41.917797    8536 provision.go:84] configureAuth start
	I0610 12:29:41.917797    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:44.213749    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:44.214100    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:44.214282    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:46.967042    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:46.967471    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:46.967471    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:49.312432    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:49.312544    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:49.312651    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:52.090532    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:52.090726    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:52.090726    8536 provision.go:143] copyHostCerts
	I0610 12:29:52.090950    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 12:29:52.091273    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:29:52.091273    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:29:52.091850    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:29:52.092736    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 12:29:52.093283    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:29:52.093283    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:29:52.093705    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:29:52.094721    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 12:29:52.094998    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:29:52.095097    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:29:52.095432    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:29:52.096118    8536 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300 san=[127.0.0.1 172.17.150.144 localhost minikube multinode-813300]
	I0610 12:29:52.185188    8536 provision.go:177] copyRemoteCerts
	I0610 12:29:52.203551    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:29:52.203551    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:54.528062    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:54.528062    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:54.528376    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:57.219889    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:57.219889    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:57.221301    8536 sshutil.go:53] new ssh client: &{IP:172.17.150.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:29:57.334411    8536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1308185s)
	I0610 12:29:57.334411    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 12:29:57.335128    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:29:57.388855    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 12:29:57.389417    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 12:29:57.440865    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 12:29:57.440865    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 12:29:57.485942    8536 provision.go:87] duration metric: took 15.5680194s to configureAuth
	I0610 12:29:57.485942    8536 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:29:57.486840    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:29:57.486978    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:59.788145    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:59.788186    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:59.788282    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:02.552883    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:02.552883    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:02.558354    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:30:02.558354    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:30:02.558940    8536 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:30:02.696563    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:30:02.696563    8536 buildroot.go:70] root file system type: tmpfs
	I0610 12:30:02.696831    8536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:30:02.696831    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:04.985348    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:04.986116    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:04.986116    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:07.764990    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:07.764990    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:07.771821    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:30:07.772272    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:30:07.772416    8536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:30:07.947905    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 12:30:07.947905    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:10.229229    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:10.229229    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:10.229735    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:12.986954    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:12.986954    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:12.993556    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:30:12.994271    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:30:12.994271    8536 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:30:15.629392    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:30:15.629510    8536 machine.go:97] duration metric: took 49.3846172s to provisionDockerMachine
	I0610 12:30:15.629551    8536 start.go:293] postStartSetup for "multinode-813300" (driver="hyperv")
	I0610 12:30:15.629551    8536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:30:15.643606    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:30:15.643606    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:17.924737    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:17.924737    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:17.925039    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:20.737689    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:20.737689    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:20.738451    8536 sshutil.go:53] new ssh client: &{IP:172.17.150.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:30:20.861148    8536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2174997s)
	I0610 12:30:20.878070    8536 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:30:20.886140    8536 command_runner.go:130] > NAME=Buildroot
	I0610 12:30:20.886261    8536 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 12:30:20.886261    8536 command_runner.go:130] > ID=buildroot
	I0610 12:30:20.886261    8536 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 12:30:20.886261    8536 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 12:30:20.886261    8536 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:30:20.886261    8536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:30:20.886912    8536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:30:20.887780    8536 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:30:20.887780    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 12:30:20.901192    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:30:20.919463    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:30:20.970028    8536 start.go:296] duration metric: took 5.3404341s for postStartSetup
	I0610 12:30:20.970028    8536 fix.go:56] duration metric: took 1m35.5489487s for fixHost
	I0610 12:30:20.970028    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:23.358856    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:23.358921    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:23.358921    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:26.123102    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:26.123102    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:26.130849    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:30:26.131005    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:30:26.131005    8536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 12:30:26.270831    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718022626.258816297
	
	I0610 12:30:26.270974    8536 fix.go:216] guest clock: 1718022626.258816297
	I0610 12:30:26.270974    8536 fix.go:229] Guest: 2024-06-10 12:30:26.258816297 +0000 UTC Remote: 2024-06-10 12:30:20.9700283 +0000 UTC m=+102.488567101 (delta=5.288787997s)
	I0610 12:30:26.271118    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:28.609922    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:28.610596    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:28.610596    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:31.337885    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:31.337885    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:31.346928    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:30:31.346928    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:30:31.346928    8536 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718022626
	I0610 12:30:31.500608    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:30:26 UTC 2024
	
	I0610 12:30:31.500691    8536 fix.go:236] clock set: Mon Jun 10 12:30:26 UTC 2024
	 (err=<nil>)
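
[editor's note] The exchange above is minikube's guest-clock fix: it reads the guest's `date +%s.%N`, compares it with the host-side timestamp it captured (delta here ~5.29s), and realigns the guest with `sudo date -s @<seconds>`. A rough sketch of the decision; the one-second tolerance and the choice of reference timestamp are assumptions for illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	host := time.Now()
	// Pretend the guest answered `date +%s.%N` about 5.29s ahead, as in the log.
	guest := host.Add(5288 * time.Millisecond)
	if d := guest.Sub(host); d > time.Second || d < -time.Second {
		// Realign the guest clock over SSH (simplified: whole seconds only).
		fmt.Printf("drift %v -> sudo date -s @%d\n", d, host.Unix())
	}
}
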
	I0610 12:30:31.500691    8536 start.go:83] releasing machines lock for "multinode-813300", held for 1m46.0795262s
	I0610 12:30:31.501016    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:33.776460    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:33.777056    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:33.777056    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:36.554030    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:36.554635    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:36.559082    8536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:30:36.559240    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:36.570714    8536 ssh_runner.go:195] Run: cat /version.json
	I0610 12:30:36.570714    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:38.925758    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:38.925758    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:38.926098    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:38.926198    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:38.926198    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:38.926198    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:41.773204    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:41.773400    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:41.773400    8536 sshutil.go:53] new ssh client: &{IP:172.17.150.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:30:41.799540    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:41.799651    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:41.800007    8536 sshutil.go:53] new ssh client: &{IP:172.17.150.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:30:41.872338    8536 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 12:30:41.872539    8536 ssh_runner.go:235] Completed: cat /version.json: (5.3017825s)
	I0610 12:30:41.885396    8536 ssh_runner.go:195] Run: systemctl --version
	I0610 12:30:42.101945    8536 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 12:30:42.103122    8536 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5428188s)
	I0610 12:30:42.103159    8536 command_runner.go:130] > systemd 252 (252)
	I0610 12:30:42.103303    8536 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 12:30:42.114776    8536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 12:30:42.123977    8536 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 12:30:42.124798    8536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:30:42.136387    8536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:30:42.165177    8536 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 12:30:42.165177    8536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
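
[editor's note] The `find ... -exec mv` above neutralizes competing CNI configs by renaming bridge/podman conflist files with a .mk_disabled suffix, which is why 87-podman-bridge.conflist is reported as disabled. The same rename in a small Go sketch (needs root on a real guest; on a machine without /etc/cni/net.d it just reports the error):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("skipping:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same selection as the find expression: bridge or podman configs.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			fmt.Println("disabling", src)
			_ = os.Rename(src, src+".mk_disabled")
		}
	}
}
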
	I0610 12:30:42.165320    8536 start.go:494] detecting cgroup driver to use...
	I0610 12:30:42.165521    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:30:42.212062    8536 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 12:30:42.226437    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:30:42.258211    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:30:42.278902    8536 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:30:42.289535    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:30:42.323665    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:30:42.355027    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:30:42.386171    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:30:42.423508    8536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:30:42.464119    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:30:42.497561    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:30:42.529363    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 12:30:42.559375    8536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:30:42.578798    8536 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 12:30:42.589359    8536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:30:42.619653    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:30:42.830921    8536 ssh_runner.go:195] Run: sudo systemctl restart containerd
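
[editor's note] Each `sed -i -r` above is a line-anchored regex substitution on /etc/containerd/config.toml; the SystemdCgroup edit is what pins containerd to the "cgroupfs" driver the log mentions. The same substitution in Go on an inline fragment:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
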
	I0610 12:30:42.862669    8536 start.go:494] detecting cgroup driver to use...
	I0610 12:30:42.874483    8536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:30:42.899477    8536 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 12:30:42.899477    8536 command_runner.go:130] > [Unit]
	I0610 12:30:42.899846    8536 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 12:30:42.899846    8536 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 12:30:42.899846    8536 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 12:30:42.899846    8536 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 12:30:42.899846    8536 command_runner.go:130] > StartLimitBurst=3
	I0610 12:30:42.899846    8536 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 12:30:42.899846    8536 command_runner.go:130] > [Service]
	I0610 12:30:42.899846    8536 command_runner.go:130] > Type=notify
	I0610 12:30:42.899846    8536 command_runner.go:130] > Restart=on-failure
	I0610 12:30:42.899846    8536 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 12:30:42.899983    8536 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 12:30:42.899983    8536 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 12:30:42.899983    8536 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 12:30:42.899983    8536 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 12:30:42.900028    8536 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 12:30:42.900028    8536 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 12:30:42.900068    8536 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 12:30:42.900091    8536 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 12:30:42.900091    8536 command_runner.go:130] > ExecStart=
	I0610 12:30:42.900091    8536 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 12:30:42.900091    8536 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 12:30:42.900164    8536 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 12:30:42.900190    8536 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 12:30:42.900190    8536 command_runner.go:130] > LimitNOFILE=infinity
	I0610 12:30:42.900220    8536 command_runner.go:130] > LimitNPROC=infinity
	I0610 12:30:42.900220    8536 command_runner.go:130] > LimitCORE=infinity
	I0610 12:30:42.900220    8536 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 12:30:42.900220    8536 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 12:30:42.900220    8536 command_runner.go:130] > TasksMax=infinity
	I0610 12:30:42.900220    8536 command_runner.go:130] > TimeoutStartSec=0
	I0610 12:30:42.900220    8536 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 12:30:42.900220    8536 command_runner.go:130] > Delegate=yes
	I0610 12:30:42.900220    8536 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 12:30:42.900220    8536 command_runner.go:130] > KillMode=process
	I0610 12:30:42.900220    8536 command_runner.go:130] > [Install]
	I0610 12:30:42.900220    8536 command_runner.go:130] > WantedBy=multi-user.target
	I0610 12:30:42.914316    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:30:42.958298    8536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:30:43.008354    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:30:43.046473    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:30:43.085725    8536 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:30:43.163345    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:30:43.192848    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:30:43.236715    8536 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 12:30:43.248701    8536 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:30:43.254691    8536 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 12:30:43.272660    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:30:43.293585    8536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:30:43.346468    8536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:30:43.587661    8536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:30:43.790758    8536 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:30:43.791070    8536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 12:30:43.841161    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:30:44.070472    8536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:30:46.791330    8536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7205702s)
	I0610 12:30:46.803685    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 12:30:46.840565    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:30:46.877595    8536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 12:30:47.102484    8536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 12:30:47.324886    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:30:47.556726    8536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 12:30:47.597477    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:30:47.633945    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:30:47.854989    8536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 12:30:47.967140    8536 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 12:30:47.982432    8536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 12:30:47.991114    8536 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 12:30:47.991114    8536 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 12:30:47.991114    8536 command_runner.go:130] > Device: 0,22	Inode: 840         Links: 1
	I0610 12:30:47.991114    8536 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 12:30:47.991114    8536 command_runner.go:130] > Access: 2024-06-10 12:30:47.879784912 +0000
	I0610 12:30:47.991114    8536 command_runner.go:130] > Modify: 2024-06-10 12:30:47.879784912 +0000
	I0610 12:30:47.991114    8536 command_runner.go:130] > Change: 2024-06-10 12:30:47.884785012 +0000
	I0610 12:30:47.991114    8536 command_runner.go:130] >  Birth: -
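
[editor's note] The "Will wait 60s for socket path" step is a simple poll: stat /var/run/cri-dockerd.sock until it exists as a socket, then proceed to the crictl checks that follow. A sketch of that loop; the 500ms interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/cri-dockerd.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		// Ready once the path exists and is a unix socket.
		if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
			fmt.Println("socket ready:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for", sock)
}
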
	I0610 12:30:47.991114    8536 start.go:562] Will wait 60s for crictl version
	I0610 12:30:48.003665    8536 ssh_runner.go:195] Run: which crictl
	I0610 12:30:48.009966    8536 command_runner.go:130] > /usr/bin/crictl
	I0610 12:30:48.021821    8536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:30:48.091336    8536 command_runner.go:130] > Version:  0.1.0
	I0610 12:30:48.091336    8536 command_runner.go:130] > RuntimeName:  docker
	I0610 12:30:48.091336    8536 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 12:30:48.091336    8536 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 12:30:48.091336    8536 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 12:30:48.101403    8536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:30:48.140012    8536 command_runner.go:130] > 26.1.4
	I0610 12:30:48.149987    8536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:30:48.185101    8536 command_runner.go:130] > 26.1.4
	I0610 12:30:48.193254    8536 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 12:30:48.193254    8536 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 12:30:48.196260    8536 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 12:30:48.196260    8536 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 12:30:48.196260    8536 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 12:30:48.196260    8536 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 12:30:48.201426    8536 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 12:30:48.201426    8536 ip.go:210] interface addr: 172.17.144.1/20
	I0610 12:30:48.213676    8536 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 12:30:48.220961    8536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
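
[editor's note] The bash one-liner above rewrites /etc/hosts in place: filter out any stale host.minikube.internal entry, append the current gateway address (172.17.144.1), and copy the temp file back. The same filtering in Go on an inline example (the old IP is invented for illustration):

package main

import (
	"fmt"
	"strings"
)

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.17.128.1\thost.minikube.internal\n"
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		// Drop the old mapping, whatever IP it pointed at.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			out = append(out, line)
		}
	}
	out = append(out, "172.17.144.1\thost.minikube.internal")
	fmt.Println(strings.Join(out, "\n"))
}
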
	I0610 12:30:48.244733    8536 kubeadm.go:877] updating cluster {Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.150.144 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.144.46 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 12:30:48.245500    8536 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:30:48.254297    8536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 12:30:48.284201    8536 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 12:30:48.284874    8536 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 12:30:48.284874    8536 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 12:30:48.284874    8536 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 12:30:48.284974    8536 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0610 12:30:48.284974    8536 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 12:30:48.284974    8536 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 12:30:48.284974    8536 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 12:30:48.284974    8536 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:30:48.284974    8536 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0610 12:30:48.285131    8536 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0610 12:30:48.285131    8536 docker.go:615] Images already preloaded, skipping extraction
	I0610 12:30:48.295523    8536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 12:30:48.327822    8536 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 12:30:48.327822    8536 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:30:48.327822    8536 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0610 12:30:48.327822    8536 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0610 12:30:48.327822    8536 cache_images.go:84] Images are preloaded, skipping loading
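
[editor's note] "Images are preloaded, skipping loading" falls out of a set comparison: the tags reported by `docker images` above are checked against the images the cluster needs, and the preload tarball is only extracted when something is missing. A reduced sketch of that check (the wanted list is truncated for illustration):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Tags as reported by `docker images --format {{.Repository}}:{{.Tag}}`.
	have := map[string]bool{}
	for _, img := range strings.Fields(`registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/pause:3.9`) {
		have[img] = true
	}
	want := []string{"registry.k8s.io/kube-apiserver:v1.30.1", "registry.k8s.io/pause:3.9"}
	missing := 0
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing:", img)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("Images are preloaded, skipping loading")
	}
}
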
	I0610 12:30:48.328349    8536 kubeadm.go:928] updating node { 172.17.150.144 8443 v1.30.1 docker true true} ...
	I0610 12:30:48.328393    8536 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.150.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 12:30:48.336375    8536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 12:30:48.379653    8536 command_runner.go:130] > cgroupfs
	I0610 12:30:48.379653    8536 cni.go:84] Creating CNI manager for ""
	I0610 12:30:48.379653    8536 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 12:30:48.379653    8536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 12:30:48.379653    8536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.150.144 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-813300 NodeName:multinode-813300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.150.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.150.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 12:30:48.379653    8536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.150.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-813300"
	  kubeletExtraArgs:
	    node-ip: 172.17.150.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.150.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 12:30:48.393675    8536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 12:30:48.416114    8536 command_runner.go:130] > kubeadm
	I0610 12:30:48.416114    8536 command_runner.go:130] > kubectl
	I0610 12:30:48.416114    8536 command_runner.go:130] > kubelet
	I0610 12:30:48.416184    8536 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 12:30:48.429880    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 12:30:48.452913    8536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0610 12:30:48.483630    8536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 12:30:48.517007    8536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
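
[editor's note] The kubeadm config printed earlier is rendered host-side and shipped to the guest as /var/tmp/minikube/kubeadm.yaml.new (the 2164-byte scp above). A minimal sketch of such a render step with text/template; the fragment and field names are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.IP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.IP}}
`))
	// Values taken from the log; rendering writes the YAML to stdout.
	_ = tmpl.Execute(os.Stdout, struct{ Name, IP string }{"multinode-813300", "172.17.150.144"})
}
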
	I0610 12:30:48.570463    8536 ssh_runner.go:195] Run: grep 172.17.150.144	control-plane.minikube.internal$ /etc/hosts
	I0610 12:30:48.577138    8536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.150.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:30:48.611992    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:30:48.834153    8536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:30:48.868245    8536 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300 for IP: 172.17.150.144
	I0610 12:30:48.868329    8536 certs.go:194] generating shared ca certs ...
	I0610 12:30:48.868374    8536 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:30:48.869175    8536 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 12:30:48.869443    8536 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 12:30:48.869970    8536 certs.go:256] generating profile certs ...
	I0610 12:30:48.870826    8536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key
	I0610 12:30:48.870826    8536 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.18129446
	I0610 12:30:48.870826    8536 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.18129446 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.150.144]
	I0610 12:30:48.967326    8536 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.18129446 ...
	I0610 12:30:48.967326    8536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.18129446: {Name:mk10a39c5392a50c9be23655c99ab50aa79910fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:30:48.969338    8536 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.18129446 ...
	I0610 12:30:48.969338    8536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.18129446: {Name:mk84e846335431ca2dddd39c9c8847a448320834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:30:48.969619    8536 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.18129446 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt
	I0610 12:30:48.983700    8536 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.18129446 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key
	I0610 12:30:48.984855    8536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key
	I0610 12:30:48.985403    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 12:30:48.985496    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 12:30:48.985496    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 12:30:48.985496    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 12:30:48.986120    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 12:30:48.986120    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 12:30:48.986120    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 12:30:48.986654    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 12:30:48.987243    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 12:30:48.987578    8536 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 12:30:48.987695    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 12:30:48.987985    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 12:30:48.988116    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 12:30:48.988116    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 12:30:48.989041    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 12:30:48.989319    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 12:30:48.989343    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 12:30:48.989343    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:30:48.991127    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 12:30:49.045283    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 12:30:49.096175    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 12:30:49.146219    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 12:30:49.199394    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 12:30:49.252212    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 12:30:49.304181    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 12:30:49.369323    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 12:30:49.425787    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 12:30:49.474507    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 12:30:49.527167    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 12:30:49.575904    8536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 12:30:49.627713    8536 ssh_runner.go:195] Run: openssl version
	I0610 12:30:49.638196    8536 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 12:30:49.651705    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 12:30:49.683246    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:30:49.690437    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:30:49.690437    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:30:49.703965    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:30:49.713866    8536 command_runner.go:130] > b5213941
	I0610 12:30:49.725992    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 12:30:49.758905    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 12:30:49.790270    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 12:30:49.800463    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:30:49.800608    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:30:49.815877    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 12:30:49.826959    8536 command_runner.go:130] > 51391683
	I0610 12:30:49.839053    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 12:30:49.870738    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 12:30:49.910794    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 12:30:49.923102    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:30:49.923102    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:30:49.935320    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 12:30:49.945061    8536 command_runner.go:130] > 3ec20f2e
	I0610 12:30:49.957426    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
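
[editor's note] The openssl/ln pairs above follow OpenSSL's CA directory convention: each PEM in /etc/ssl/certs gets a symlink named <subject-hash>.0 pointing at it, where the hash is what `openssl x509 -hash -noout` printed (b5213941, 51391683, 3ec20f2e). A sketch that just reproduces the three link commands from the logged values:

package main

import "fmt"

func main() {
	// subject-hash -> PEM, as computed in the log above.
	links := map[string]string{
		"b5213941": "minikubeCA.pem",
		"51391683": "7548.pem",
		"3ec20f2e": "75482.pem",
	}
	for hash, pem := range links {
		fmt.Printf("ln -fs /etc/ssl/certs/%s /etc/ssl/certs/%s.0\n", pem, hash)
	}
}
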
	I0610 12:30:49.993286    8536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:30:50.004954    8536 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:30:50.005028    8536 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0610 12:30:50.005028    8536 command_runner.go:130] > Device: 8,1	Inode: 5243218     Links: 1
	I0610 12:30:50.005028    8536 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 12:30:50.005028    8536 command_runner.go:130] > Access: 2024-06-10 12:07:48.567870685 +0000
	I0610 12:30:50.005211    8536 command_runner.go:130] > Modify: 2024-06-10 12:07:48.567870685 +0000
	I0610 12:30:50.005280    8536 command_runner.go:130] > Change: 2024-06-10 12:07:48.567870685 +0000
	I0610 12:30:50.005280    8536 command_runner.go:130] >  Birth: 2024-06-10 12:07:48.567870685 +0000
	I0610 12:30:50.019875    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 12:30:50.030991    8536 command_runner.go:130] > Certificate will not expire
	I0610 12:30:50.043975    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 12:30:50.060811    8536 command_runner.go:130] > Certificate will not expire
	I0610 12:30:50.071748    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 12:30:50.084717    8536 command_runner.go:130] > Certificate will not expire
	I0610 12:30:50.095710    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 12:30:50.105170    8536 command_runner.go:130] > Certificate will not expire
	I0610 12:30:50.116979    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 12:30:50.126077    8536 command_runner.go:130] > Certificate will not expire
	I0610 12:30:50.138413    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 12:30:50.147367    8536 command_runner.go:130] > Certificate will not expire
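
[editor's note] Each `openssl x509 -checkend 86400` above asks whether a certificate expires within the next 24 hours (86400 seconds). The equivalent check in Go with crypto/x509, using one of the paths from the log (it only runs meaningfully on the guest):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("skipping:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// Mirrors -checkend 86400: will the cert still be valid a day from now?
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
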
	I0610 12:30:50.147929    8536 kubeadm.go:391] StartCluster: {Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.150.144 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.144.46 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:30:50.156587    8536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 12:30:50.190885    8536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 12:30:50.208685    8536 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0610 12:30:50.208912    8536 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0610 12:30:50.208912    8536 command_runner.go:130] > /var/lib/minikube/etcd:
	I0610 12:30:50.208912    8536 command_runner.go:130] > member
	W0610 12:30:50.208912    8536 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 12:30:50.208912    8536 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 12:30:50.208912    8536 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 12:30:50.221129    8536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 12:30:50.246391    8536 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 12:30:50.247783    8536 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-813300" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:30:50.248308    8536 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-813300" cluster setting kubeconfig missing "multinode-813300" context setting]
	I0610 12:30:50.249269    8536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:30:50.264266    8536 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:30:50.265683    8536 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.150.144:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:30:50.267447    8536 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 12:30:50.279479    8536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 12:30:50.299920    8536 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0610 12:30:50.299983    8536 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0610 12:30:50.299983    8536 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0610 12:30:50.300044    8536 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0610 12:30:50.300079    8536 command_runner.go:130] >  kind: InitConfiguration
	I0610 12:30:50.300079    8536 command_runner.go:130] >  localAPIEndpoint:
	I0610 12:30:50.300079    8536 command_runner.go:130] > -  advertiseAddress: 172.17.159.171
	I0610 12:30:50.300079    8536 command_runner.go:130] > +  advertiseAddress: 172.17.150.144
	I0610 12:30:50.300135    8536 command_runner.go:130] >    bindPort: 8443
	I0610 12:30:50.300135    8536 command_runner.go:130] >  bootstrapTokens:
	I0610 12:30:50.300160    8536 command_runner.go:130] >    - groups:
	I0610 12:30:50.300160    8536 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0610 12:30:50.300160    8536 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0610 12:30:50.300160    8536 command_runner.go:130] >    name: "multinode-813300"
	I0610 12:30:50.300238    8536 command_runner.go:130] >    kubeletExtraArgs:
	I0610 12:30:50.300238    8536 command_runner.go:130] > -    node-ip: 172.17.159.171
	I0610 12:30:50.300238    8536 command_runner.go:130] > +    node-ip: 172.17.150.144
	I0610 12:30:50.300238    8536 command_runner.go:130] >    taints: []
	I0610 12:30:50.300238    8536 command_runner.go:130] >  ---
	I0610 12:30:50.300339    8536 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0610 12:30:50.300339    8536 command_runner.go:130] >  kind: ClusterConfiguration
	I0610 12:30:50.300339    8536 command_runner.go:130] >  apiServer:
	I0610 12:30:50.300339    8536 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.17.159.171"]
	I0610 12:30:50.300339    8536 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.17.150.144"]
	I0610 12:30:50.300423    8536 command_runner.go:130] >    extraArgs:
	I0610 12:30:50.300450    8536 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0610 12:30:50.300450    8536 command_runner.go:130] >  controllerManager:
	I0610 12:30:50.300450    8536 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.159.171
	+  advertiseAddress: 172.17.150.144
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-813300"
	   kubeletExtraArgs:
	-    node-ip: 172.17.159.171
	+    node-ip: 172.17.150.144
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.159.171"]
	+  certSANs: ["127.0.0.1", "localhost", "172.17.150.144"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0610 12:30:50.300450    8536 kubeadm.go:1154] stopping kube-system containers ...
	I0610 12:30:50.308031    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 12:30:50.337037    8536 command_runner.go:130] > f2e39052db19
	I0610 12:30:50.337649    8536 command_runner.go:130] > d32ce22e31b0
	I0610 12:30:50.337649    8536 command_runner.go:130] > a0bc6043f7b9
	I0610 12:30:50.337649    8536 command_runner.go:130] > a1ae7aed0067
	I0610 12:30:50.337649    8536 command_runner.go:130] > c39d54960e7d
	I0610 12:30:50.337649    8536 command_runner.go:130] > afad8b05897e
	I0610 12:30:50.337649    8536 command_runner.go:130] > 689b8976cc02
	I0610 12:30:50.337649    8536 command_runner.go:130] > 62db1c721951
	I0610 12:30:50.337649    8536 command_runner.go:130] > bd1a6cd98743
	I0610 12:30:50.337649    8536 command_runner.go:130] > f1409bf44ff1
	I0610 12:30:50.337649    8536 command_runner.go:130] > 34b9299d74e3
	I0610 12:30:50.337649    8536 command_runner.go:130] > ba52603f8387
	I0610 12:30:50.337649    8536 command_runner.go:130] > f04d7b3d4fcc
	I0610 12:30:50.337649    8536 command_runner.go:130] > c7d28a97ba1c
	I0610 12:30:50.337649    8536 command_runner.go:130] > e3b6aa9a0e1d
	I0610 12:30:50.337649    8536 command_runner.go:130] > a10e49596de5
	I0610 12:30:50.339006    8536 docker.go:483] Stopping containers: [f2e39052db19 d32ce22e31b0 a0bc6043f7b9 a1ae7aed0067 c39d54960e7d afad8b05897e 689b8976cc02 62db1c721951 bd1a6cd98743 f1409bf44ff1 34b9299d74e3 ba52603f8387 f04d7b3d4fcc c7d28a97ba1c e3b6aa9a0e1d a10e49596de5]
	I0610 12:30:50.350377    8536 ssh_runner.go:195] Run: docker stop f2e39052db19 d32ce22e31b0 a0bc6043f7b9 a1ae7aed0067 c39d54960e7d afad8b05897e 689b8976cc02 62db1c721951 bd1a6cd98743 f1409bf44ff1 34b9299d74e3 ba52603f8387 f04d7b3d4fcc c7d28a97ba1c e3b6aa9a0e1d a10e49596de5
	I0610 12:30:50.383440    8536 command_runner.go:130] > f2e39052db19
	I0610 12:30:50.383440    8536 command_runner.go:130] > d32ce22e31b0
	I0610 12:30:50.383440    8536 command_runner.go:130] > a0bc6043f7b9
	I0610 12:30:50.383440    8536 command_runner.go:130] > a1ae7aed0067
	I0610 12:30:50.383440    8536 command_runner.go:130] > c39d54960e7d
	I0610 12:30:50.383440    8536 command_runner.go:130] > afad8b05897e
	I0610 12:30:50.383440    8536 command_runner.go:130] > 689b8976cc02
	I0610 12:30:50.383440    8536 command_runner.go:130] > 62db1c721951
	I0610 12:30:50.383440    8536 command_runner.go:130] > bd1a6cd98743
	I0610 12:30:50.383440    8536 command_runner.go:130] > f1409bf44ff1
	I0610 12:30:50.383440    8536 command_runner.go:130] > 34b9299d74e3
	I0610 12:30:50.383440    8536 command_runner.go:130] > ba52603f8387
	I0610 12:30:50.383440    8536 command_runner.go:130] > f04d7b3d4fcc
	I0610 12:30:50.383440    8536 command_runner.go:130] > c7d28a97ba1c
	I0610 12:30:50.383440    8536 command_runner.go:130] > e3b6aa9a0e1d
	I0610 12:30:50.383440    8536 command_runner.go:130] > a10e49596de5
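	The two steps above, listing the kube-system pod containers and then stopping them by ID, collapse into one pipeline; this is only a sketch of what the log shows, not minikube's exact invocation:

	    # Stop every kube-system pod container in one pass; the name filter matches
	    # the Docker naming scheme k8s_<container>_<pod>_(kube-system)_... seen above.
	    ids=$(docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}')
	    [ -n "$ids" ] && docker stop $ids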
	I0610 12:30:50.397012    8536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 12:30:50.443003    8536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 12:30:50.463699    8536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 12:30:50.463860    8536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 12:30:50.463860    8536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 12:30:50.463860    8536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:30:50.464086    8536 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:30:50.464167    8536 kubeadm.go:156] found existing configuration files:
	
	I0610 12:30:50.477350    8536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 12:30:50.496838    8536 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:30:50.496838    8536 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:30:50.507829    8536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 12:30:50.548835    8536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 12:30:50.568660    8536 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:30:50.568660    8536 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:30:50.580851    8536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 12:30:50.611996    8536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 12:30:50.629155    8536 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:30:50.629155    8536 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:30:50.640648    8536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 12:30:50.673025    8536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 12:30:50.689528    8536 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:30:50.690156    8536 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:30:50.701757    8536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
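	The four grep/rm pairs above are one pattern repeated per kubeconfig file: keep the file only if it already points at the expected control-plane endpoint. As a loop (a sketch; minikube unrolls it file by file, as logged):

	    # Drop any kubeconfig that does not reference the expected endpoint.
	    endpoint='https://control-plane.minikube.internal:8443'
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done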
	I0610 12:30:50.733605    8536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 12:30:50.750642    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using the existing "sa" key
	I0610 12:30:51.050154    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:30:53.559657    8536 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:30:53.560937    8536 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:30:53.560937    8536 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:30:53.560937    8536 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:30:53.560937    8536 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:30:53.560937    8536 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:30:53.560937    8536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.5104503s)
	I0610 12:30:53.560937    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:30:53.676924    8536 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:30:53.679941    8536 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:30:53.680103    8536 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 12:30:53.906932    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:30:54.006693    8536 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:30:54.006693    8536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:30:54.006693    8536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:30:54.006693    8536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:30:54.006814    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:30:54.116485    8536 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
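	Rather than a full kubeadm init, the restart replays individual init phases against the regenerated config; the sequence run above (plus the addon phase that follows once the apiserver is healthy) condenses to:

	    # Replay the kubeadm init phases minikube runs on restart (see the Run: lines above).
	    KA='sudo env PATH=/var/lib/minikube/binaries/v1.30.1:$PATH kubeadm init phase'
	    CFG=/var/tmp/minikube/kubeadm.yaml
	    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	        /bin/bash -c "$KA $phase --config $CFG"
	    done
	    # 'addon all' runs later, only after the healthz checks below succeed.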
	I0610 12:30:54.116615    8536 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:30:54.128579    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:30:54.639320    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:30:55.147507    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:30:55.645247    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:30:56.145320    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:30:56.174980    8536 command_runner.go:130] > 1892
	I0610 12:30:56.175127    8536 api_server.go:72] duration metric: took 2.0584961s to wait for apiserver process to appear ...
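	The repeated pgrep runs above are a poll loop: minikube retries roughly every half second until a kube-apiserver process exists (PID 1892 here, after about 2.06s). An equivalent by hand:

	    # Poll until a kube-apiserver process appears, as the repeated pgrep runs do.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 0.5; done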
	I0610 12:30:56.175220    8536 api_server.go:88] waiting for apiserver healthz status ...
	I0610 12:30:56.175332    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:30:59.397470    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 12:30:59.398212    8536 api_server.go:103] status: https://172.17.150.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 12:30:59.398212    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:30:59.485722    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 12:30:59.485722    8536 api_server.go:103] status: https://172.17.150.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 12:30:59.677153    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:30:59.685073    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 12:30:59.685073    8536 api_server.go:103] status: https://172.17.150.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 12:31:00.178702    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:31:00.189602    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 12:31:00.189713    8536 api_server.go:103] status: https://172.17.150.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 12:31:00.685079    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:31:00.693473    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 12:31:00.693473    8536 api_server.go:103] status: https://172.17.150.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 12:31:01.175816    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:31:01.182969    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 200:
	ok
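	The 403 -> 500 -> 200 progression above is the apiserver booting: anonymous probes are Forbidden until RBAC is wired up, then /healthz itemizes its post-start hooks ([-]rbac/bootstrap-roles, then [-]scheduling/bootstrap-system-priority-classes) until every check passes. A manual equivalent with curl (curl is an assumption; the log uses minikube's own HTTP client):

	    # -k skips TLS verification, which is why the probe arrives as system:anonymous;
	    # ?verbose prints the per-check [+]/[-] detail even on success.
	    curl -ks 'https://172.17.150.144:8443/healthz?verbose'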
	I0610 12:31:01.182969    8536 round_trippers.go:463] GET https://172.17.150.144:8443/version
	I0610 12:31:01.182969    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:01.182969    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:01.182969    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:01.194421    8536 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 12:31:01.194701    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:01.194701    8536 round_trippers.go:580]     Audit-Id: bdef7251-952d-4176-808e-102f8bc9bca4
	I0610 12:31:01.194701    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:01.194767    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:01.194767    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:01.194810    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:01.194810    8536 round_trippers.go:580]     Content-Length: 263
	I0610 12:31:01.194838    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:01 GMT
	I0610 12:31:01.194914    8536 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 12:31:01.194914    8536 api_server.go:141] control plane version: v1.30.1
	I0610 12:31:01.194914    8536 api_server.go:131] duration metric: took 5.0196532s to wait for apiserver health ...
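	The /version body confirms the control plane is the expected v1.30.1 build. The same check via kubectl (a hypothetical spot check using the kubeconfig minikube writes, not a step the test performs):

	    # Print client and server versions as JSON; serverVersion should match the body above.
	    sudo /var/lib/minikube/binaries/v1.30.1/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig version --output=json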
	I0610 12:31:01.194914    8536 cni.go:84] Creating CNI manager for ""
	I0610 12:31:01.194914    8536 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 12:31:01.198299    8536 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 12:31:01.216425    8536 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 12:31:01.225408    8536 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 12:31:01.225408    8536 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0610 12:31:01.225408    8536 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0610 12:31:01.225408    8536 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 12:31:01.225408    8536 command_runner.go:130] > Access: 2024-06-10 12:29:17.417483400 +0000
	I0610 12:31:01.225408    8536 command_runner.go:130] > Modify: 2024-06-06 15:35:25.000000000 +0000
	I0610 12:31:01.225408    8536 command_runner.go:130] > Change: 2024-06-10 12:29:06.186000000 +0000
	I0610 12:31:01.225408    8536 command_runner.go:130] >  Birth: -
	I0610 12:31:01.226407    8536 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 12:31:01.226407    8536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 12:31:01.303294    8536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 12:31:02.478704    8536 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0610 12:31:02.478841    8536 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0610 12:31:02.478841    8536 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0610 12:31:02.478841    8536 command_runner.go:130] > daemonset.apps/kindnet configured
	I0610 12:31:02.479112    8536 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1758084s)
	I0610 12:31:02.479112    8536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:31:02.479112    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods
	I0610 12:31:02.479112    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.479112    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.479112    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.485944    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:02.485944    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.485944    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.485944    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.485944    8536 round_trippers.go:580]     Audit-Id: 14fde666-ec61-46cb-bd29-b228dcf0a637
	I0610 12:31:02.485944    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.485944    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.485944    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.487924    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1666"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87830 chars]
	I0610 12:31:02.494908    8536 system_pods.go:59] 12 kube-system pods found
	I0610 12:31:02.494908    8536 system_pods.go:61] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 12:31:02.494908    8536 system_pods.go:61] "etcd-multinode-813300" [f9259e5e-61e9-4252-b7c6-de5d499eb9c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 12:31:02.494908    8536 system_pods.go:61] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0610 12:31:02.494908    8536 system_pods.go:61] "kindnet-2pc4j" [966ce4c1-e9ee-48d6-9e52-98143fa03e67] Running
	I0610 12:31:02.494908    8536 system_pods.go:61] "kindnet-r4nfq" [dceb3d20-8d04-4408-927f-1c195558dd19] Running
	I0610 12:31:02.494908    8536 system_pods.go:61] "kube-apiserver-multinode-813300" [2cf29b2c-a2a9-46ec-bbc8-fe884e97df06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 12:31:02.494908    8536 system_pods.go:61] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 12:31:02.494908    8536 system_pods.go:61] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:31:02.494908    8536 system_pods.go:61] "kube-proxy-rx2b2" [ce59a99b-a561-4598-9399-147f748433a2] Running
	I0610 12:31:02.495900    8536 system_pods.go:61] "kube-proxy-vw56h" [f3f9e738-89d2-4776-a212-a1ca28952f7c] Running
	I0610 12:31:02.495900    8536 system_pods.go:61] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 12:31:02.495900    8536 system_pods.go:61] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:31:02.495900    8536 system_pods.go:74] duration metric: took 16.7882ms to wait for pod list to return data ...
	I0610 12:31:02.495900    8536 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:31:02.495900    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes
	I0610 12:31:02.495900    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.495900    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.495900    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.500905    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:02.500905    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.501060    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.501060    8536 round_trippers.go:580]     Audit-Id: b1f4f287-acba-409f-8a8d-4d6717d703d2
	I0610 12:31:02.501060    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.501060    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.501060    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.501060    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.501836    8536 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1666"},"items":[{"metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16303 chars]
	I0610 12:31:02.503048    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:31:02.503101    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:31:02.503101    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:31:02.503101    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:31:02.503101    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:31:02.503101    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:31:02.503101    8536 node_conditions.go:105] duration metric: took 7.2006ms to run NodePressure ...
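	The NodePressure pass just reads capacity off each of the three node objects, hence the three identical pairs above. A hypothetical way to pull the same figures (the kubectl invocation and jsonpath are assumptions, not from the log):

	    # Print per-node cpu and ephemeral-storage capacity, the values checked above.
	    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" cpu="}{.status.capacity.cpu}{" ephemeral-storage="}{.status.capacity.ephemeral-storage}{"\n"}{end}'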
	I0610 12:31:02.503101    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:31:02.873571    8536 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 12:31:02.873571    8536 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 12:31:02.873571    8536 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0610 12:31:02.874581    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0610 12:31:02.874581    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.874581    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.874581    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.879570    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:02.879570    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.879570    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.879570    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.879570    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.879570    8536 round_trippers.go:580]     Audit-Id: 79a391e0-8be4-4aaa-beb0-e00a33e8b2c4
	I0610 12:31:02.879570    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.879570    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.880291    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1668"},"items":[{"metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"f9259e5e-61e9-4252-b7c6-de5d499eb9c1","resourceVersion":"1659","creationTimestamp":"2024-06-10T12:31:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.150.144:2379","kubernetes.io/config.hash":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.mirror":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.seen":"2024-06-10T12:30:54.120335207Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0610 12:31:02.882044    8536 kubeadm.go:733] kubelet initialised
	I0610 12:31:02.882044    8536 kubeadm.go:734] duration metric: took 8.473ms waiting for restarted kubelet to initialise ...
	I0610 12:31:02.882044    8536 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
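	What follows is that wait unrolled: for each system-critical pod, minikube fetches the pod, then its node, and skips the pod (the pod_ready.go:97/:66 lines below) while the node still reports Ready=False. A rough standalone equivalent, a sketch rather than minikube's actual logic:

	    # Block until every kube-system pod reports Ready, up to the same 4m budget.
	    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system wait pod --all --for=condition=Ready --timeout=4m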
	I0610 12:31:02.882044    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods
	I0610 12:31:02.882044    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.882044    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.882044    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.887039    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:02.887039    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.887669    8536 round_trippers.go:580]     Audit-Id: 645d917a-adee-47ee-a51a-10c345996109
	I0610 12:31:02.887669    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.887669    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.887669    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.887669    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.887669    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.889974    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1668"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87830 chars]
	I0610 12:31:02.895373    8536 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:02.895494    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:02.895494    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.895494    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.895494    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.898800    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:02.898800    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.898800    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.898800    8536 round_trippers.go:580]     Audit-Id: 391001a6-7791-41f1-879f-b91a5ae733fc
	I0610 12:31:02.898800    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.898800    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.898800    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.898800    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.899331    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:02.899508    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:02.899508    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.899508    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.899508    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.902367    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:02.902367    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.902367    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.902367    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.902367    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.902367    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.902367    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.902367    8536 round_trippers.go:580]     Audit-Id: dfa7686e-65f7-4049-8bf9-d729b9f92192
	I0610 12:31:02.903372    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:02.903372    8536 pod_ready.go:97] node "multinode-813300" hosting pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.903372    8536 pod_ready.go:81] duration metric: took 7.9986ms for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:02.903372    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.903372    8536 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:02.903372    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-813300
	I0610 12:31:02.903372    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.903372    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.903372    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.906413    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:02.906413    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.906413    8536 round_trippers.go:580]     Audit-Id: efd0c92b-050e-4757-994b-e754b554d826
	I0610 12:31:02.906413    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.906413    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.906413    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.906710    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.906710    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.906923    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"f9259e5e-61e9-4252-b7c6-de5d499eb9c1","resourceVersion":"1659","creationTimestamp":"2024-06-10T12:31:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.150.144:2379","kubernetes.io/config.hash":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.mirror":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.seen":"2024-06-10T12:30:54.120335207Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0610 12:31:02.907621    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:02.907621    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.907621    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.907621    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.910397    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:02.910397    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.910455    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.910455    8536 round_trippers.go:580]     Audit-Id: 8150a705-5a3f-42c3-99d7-c74227871cc0
	I0610 12:31:02.910455    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.910455    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.910455    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.910455    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.910571    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:02.910571    8536 pod_ready.go:97] node "multinode-813300" hosting pod "etcd-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.911095    8536 pod_ready.go:81] duration metric: took 7.1996ms for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:02.911095    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "etcd-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.911095    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:02.911246    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-813300
	I0610 12:31:02.911246    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.911293    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.911293    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.913989    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:02.913989    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.914172    8536 round_trippers.go:580]     Audit-Id: 45056189-f5f9-49cb-bb3a-797c61c8592f
	I0610 12:31:02.914172    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.914172    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.914172    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.914172    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.914172    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.914356    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-813300","namespace":"kube-system","uid":"2cf29b2c-a2a9-46ec-bbc8-fe884e97df06","resourceVersion":"1655","creationTimestamp":"2024-06-10T12:31:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.150.144:8443","kubernetes.io/config.hash":"180cf4cc399d604c28cc4df1442ebd5a","kubernetes.io/config.mirror":"180cf4cc399d604c28cc4df1442ebd5a","kubernetes.io/config.seen":"2024-06-10T12:30:54.115839018Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0610 12:31:02.914924    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:02.915022    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.915022    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.915022    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.916930    8536 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 12:31:02.916930    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.916930    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.916930    8536 round_trippers.go:580]     Audit-Id: a410d13c-ec8b-40ab-a942-83be9c65946f
	I0610 12:31:02.916930    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.916930    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.916930    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.916930    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.916930    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:02.917925    8536 pod_ready.go:97] node "multinode-813300" hosting pod "kube-apiserver-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.917925    8536 pod_ready.go:81] duration metric: took 6.8295ms for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:02.917925    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "kube-apiserver-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.917925    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:02.917925    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-813300
	I0610 12:31:02.917925    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.917925    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.917925    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.920935    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:02.921515    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.921515    8536 round_trippers.go:580]     Audit-Id: 05339cf1-4ebb-4088-a220-d700387f99fd
	I0610 12:31:02.921515    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.921515    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.921515    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.921584    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.921584    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.921999    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-813300","namespace":"kube-system","uid":"879be9d7-8b2b-4f58-ba70-61d4e9f3441e","resourceVersion":"1654","creationTimestamp":"2024-06-10T12:08:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.mirror":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.seen":"2024-06-10T12:08:00.781970961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0610 12:31:02.922227    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:02.922227    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.922227    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.922227    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.928116    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:02.928116    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.928216    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.928216    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.928216    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.928216    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.928216    8536 round_trippers.go:580]     Audit-Id: a74cf69d-2660-457a-bf90-45955074ce7b
	I0610 12:31:02.928216    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.928273    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:02.928984    8536 pod_ready.go:97] node "multinode-813300" hosting pod "kube-controller-manager-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.929039    8536 pod_ready.go:81] duration metric: took 11.1144ms for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:02.929039    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "kube-controller-manager-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.929039    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:03.093997    8536 request.go:629] Waited for 164.9567ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:31:03.093997    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:31:03.093997    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:03.093997    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:03.093997    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:03.103604    8536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 12:31:03.103604    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:03.103604    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:03.103604    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:03 GMT
	I0610 12:31:03.103604    8536 round_trippers.go:580]     Audit-Id: aad52ca2-9c19-4c33-83f0-ff570cea1992
	I0610 12:31:03.103604    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:03.103604    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:03.103604    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:03.104857    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrpvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1","resourceVersion":"1665","creationTimestamp":"2024-06-10T12:08:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0610 12:31:03.282628    8536 request.go:629] Waited for 177.0208ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:03.283099    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:03.283099    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:03.283099    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:03.283099    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:03.288630    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:03.289026    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:03.289063    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:03.289063    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:03.289129    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:03 GMT
	I0610 12:31:03.289129    8536 round_trippers.go:580]     Audit-Id: 596762d1-fe79-4da7-982f-3b1e85edaa26
	I0610 12:31:03.289162    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:03.289162    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:03.289408    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:03.289966    8536 pod_ready.go:97] node "multinode-813300" hosting pod "kube-proxy-nrpvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:03.289966    8536 pod_ready.go:81] duration metric: took 360.9241ms for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:03.289966    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "kube-proxy-nrpvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
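
The "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's own rate limiter, not from the server: the default rest.Config allows roughly 5 requests/sec with a burst of 10, so the back-to-back pod and node GETs queue for the 160-200ms intervals logged here. A hedged sketch of building a client with higher limits (the values are illustrative):

    // Sketch: client-go throttles on the client via rest.Config QPS/Burst.
    // The defaults (QPS=5, Burst=10) account for the waits logged above.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newClientWithHigherLimits(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // illustrative value; client-go default is 5
        cfg.Burst = 100 // illustrative value; client-go default is 10
        return kubernetes.NewForConfig(cfg)
    }
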
	I0610 12:31:03.289966    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:03.484026    8536 request.go:629] Waited for 194.0586ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:31:03.484462    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:31:03.484462    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:03.484521    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:03.484547    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:03.488022    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:03.488022    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:03.488022    8536 round_trippers.go:580]     Audit-Id: e3e76be0-0f77-481c-bb74-ff01b44ee288
	I0610 12:31:03.489016    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:03.489016    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:03.489016    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:03.489016    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:03.489016    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:03 GMT
	I0610 12:31:03.489259    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rx2b2","generateName":"kube-proxy-","namespace":"kube-system","uid":"ce59a99b-a561-4598-9399-147f748433a2","resourceVersion":"1632","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0610 12:31:03.686269    8536 request.go:629] Waited for 196.2702ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:31:03.686482    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:31:03.686482    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:03.686482    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:03.686482    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:03.693405    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:03.693561    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:03.693561    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:03.693561    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:03.693561    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:03.693561    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:03.693561    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:03 GMT
	I0610 12:31:03.693561    8536 round_trippers.go:580]     Audit-Id: 30e43582-bd8a-4f69-8a52-a61f15374c7f
	I0610 12:31:03.693561    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"1628","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4486 chars]
	I0610 12:31:03.694336    8536 pod_ready.go:97] node "multinode-813300-m02" hosting pod "kube-proxy-rx2b2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m02" has status "Ready":"Unknown"
	I0610 12:31:03.694336    8536 pod_ready.go:81] duration metric: took 404.3666ms for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:03.694336    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300-m02" hosting pod "kube-proxy-rx2b2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m02" has status "Ready":"Unknown"
	I0610 12:31:03.694336    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vw56h" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:03.888087    8536 request.go:629] Waited for 193.7498ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vw56h
	I0610 12:31:03.888457    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vw56h
	I0610 12:31:03.888705    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:03.888705    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:03.888705    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:03.892282    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:03.892709    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:03.892709    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:03.892709    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:03.892709    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:03 GMT
	I0610 12:31:03.892709    8536 round_trippers.go:580]     Audit-Id: 736fba8c-c8d1-49ab-9c03-8765cad1c045
	I0610 12:31:03.892709    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:03.892709    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:03.893266    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vw56h","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3f9e738-89d2-4776-a212-a1ca28952f7c","resourceVersion":"1595","creationTimestamp":"2024-06-10T12:25:52Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0610 12:31:04.093768    8536 request.go:629] Waited for 199.281ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m03
	I0610 12:31:04.093889    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m03
	I0610 12:31:04.093889    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:04.094038    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:04.094038    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:04.098761    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:04.098761    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:04.098761    8536 round_trippers.go:580]     Audit-Id: 62398428-3155-4df8-b2fb-6886a46ac3b0
	I0610 12:31:04.098761    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:04.098895    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:04.098895    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:04.098895    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:04.098895    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:04 GMT
	I0610 12:31:04.099264    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m03","uid":"7d0b0b62-45c8-40aa-9f7a-5bb189395355","resourceVersion":"1603","creationTimestamp":"2024-06-10T12:25:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_25_53_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4413 chars]
	I0610 12:31:04.099721    8536 pod_ready.go:97] node "multinode-813300-m03" hosting pod "kube-proxy-vw56h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m03" has status "Ready":"Unknown"
	I0610 12:31:04.099775    8536 pod_ready.go:81] duration metric: took 405.4353ms for pod "kube-proxy-vw56h" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:04.099775    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300-m03" hosting pod "kube-proxy-vw56h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m03" has status "Ready":"Unknown"
	I0610 12:31:04.099775    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:04.283268    8536 request.go:629] Waited for 183.0245ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:31:04.283356    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:31:04.283356    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:04.283356    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:04.283356    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:04.287323    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:04.288299    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:04.288299    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:04.288398    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:04 GMT
	I0610 12:31:04.288514    8536 round_trippers.go:580]     Audit-Id: e0579cbf-42b1-4b5f-9bc6-47a5f77d894f
	I0610 12:31:04.288514    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:04.288514    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:04.288514    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:04.288514    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-813300","namespace":"kube-system","uid":"bd85735c-2f0d-48ab-bb0e-83f471c3af0a","resourceVersion":"1658","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.mirror":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.seen":"2024-06-10T12:08:00.781972261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0610 12:31:04.489460    8536 request.go:629] Waited for 200.2037ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:04.489460    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:04.489460    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:04.489460    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:04.489460    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:04.493865    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:04.493865    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:04.493927    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:04.493927    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:04.493927    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:04.493927    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:04.493927    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:04 GMT
	I0610 12:31:04.493927    8536 round_trippers.go:580]     Audit-Id: 758fdf9e-0468-42f1-b687-07d4951d7bfc
	I0610 12:31:04.494276    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:04.494326    8536 pod_ready.go:97] node "multinode-813300" hosting pod "kube-scheduler-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:04.494326    8536 pod_ready.go:81] duration metric: took 394.5484ms for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:04.494326    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "kube-scheduler-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:04.494326    8536 pod_ready.go:38] duration metric: took 1.6122687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
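
The selector list in the duration line above is the whole surface of the extra wait: one pod lookup per label selector in the kube-system namespace. A sketch of enumerating the same set with client-go (selectors copied from the log; the helper name is illustrative):

    // Sketch: list the system-critical pods covered by the extra wait,
    // one List call per label selector from the log line above.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func listCriticalPods(ctx context.Context, c kubernetes.Interface) ([]corev1.Pod, error) {
        selectors := []string{
            "k8s-app=kube-dns",
            "component=etcd",
            "component=kube-apiserver",
            "component=kube-controller-manager",
            "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        var pods []corev1.Pod
        for _, sel := range selectors {
            list, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return nil, err
            }
            pods = append(pods, list.Items...)
        }
        return pods, nil
    }
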
	I0610 12:31:04.494326    8536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 12:31:04.517070    8536 command_runner.go:130] > -16
	I0610 12:31:04.517070    8536 ops.go:34] apiserver oom_adj: -16
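
The oom_adj probe confirms the apiserver process is shielded from the kernel OOM killer: on the legacy -17..15 oom_adj scale, -16 makes the process one of the last candidates to be killed. A sketch of the same read in Go, assuming it runs on the node itself (the /proc path matches the logged shell command):

    // Sketch: read a process's oom_adj, as the logged
    // `cat /proc/$(pgrep kube-apiserver)/oom_adj` does on the node.
    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    func oomAdj(pid int) (int, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(b)))
    }
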
	I0610 12:31:04.517070    8536 kubeadm.go:591] duration metric: took 14.3080415s to restartPrimaryControlPlane
	I0610 12:31:04.517070    8536 kubeadm.go:393] duration metric: took 14.3690238s to StartCluster
	I0610 12:31:04.517070    8536 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:31:04.517070    8536 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:31:04.519181    8536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:31:04.520611    8536 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.150.144 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 12:31:04.520611    8536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 12:31:04.529652    8536 out.go:177] * Verifying Kubernetes components...
	I0610 12:31:04.520984    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:31:04.534463    8536 out.go:177] * Enabled addons: 
	I0610 12:31:04.537036    8536 addons.go:510] duration metric: took 16.5027ms for enable addons: enabled=[]
	I0610 12:31:04.545277    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:31:04.828330    8536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:31:04.855843    8536 node_ready.go:35] waiting up to 6m0s for node "multinode-813300" to be "Ready" ...
	I0610 12:31:04.855843    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:04.855843    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:04.855843    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:04.855843    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:04.859860    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:04.859860    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:04.859860    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:04.860582    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:04 GMT
	I0610 12:31:04.860582    8536 round_trippers.go:580]     Audit-Id: 15e53225-994c-4023-a98c-d402e1c3231a
	I0610 12:31:04.860582    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:04.860582    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:04.860582    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:04.860871    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:05.370840    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:05.370840    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:05.370840    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:05.370840    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:05.375459    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:05.375995    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:05.376058    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:05.376058    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:05.376058    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:05 GMT
	I0610 12:31:05.376058    8536 round_trippers.go:580]     Audit-Id: 686b7362-0e64-4b0e-9fde-542a554fb89c
	I0610 12:31:05.376058    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:05.376109    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:05.376933    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:05.857080    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:05.857080    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:05.857080    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:05.857080    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:05.861996    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:05.861996    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:05.862192    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:05.862192    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:05 GMT
	I0610 12:31:05.862192    8536 round_trippers.go:580]     Audit-Id: 6319b36e-4baf-465d-8539-c68d257be543
	I0610 12:31:05.862192    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:05.862249    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:05.862249    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:05.862249    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:06.369456    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:06.369566    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:06.369592    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:06.369592    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:06.377335    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:31:06.377335    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:06.377335    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:06.377335    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:06.377335    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:06 GMT
	I0610 12:31:06.377335    8536 round_trippers.go:580]     Audit-Id: d47d764a-acd3-4949-b4f2-e427230cb069
	I0610 12:31:06.377335    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:06.377335    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:06.378326    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:06.862414    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:06.862414    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:06.862414    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:06.862414    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:06.869332    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:06.869630    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:06.869630    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:06.869630    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:06.869630    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:06.869630    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:06 GMT
	I0610 12:31:06.869630    8536 round_trippers.go:580]     Audit-Id: 2816d510-1ee4-4f86-aeaa-7aa02d72832e
	I0610 12:31:06.869630    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:06.869853    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:06.870312    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
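
From here the log settles into a roughly 500ms polling loop on the node object until "Ready" turns "True" or the 6m0s budget from start.go expires. The shape of that loop, sketched with apimachinery's wait helper and the nodeIsReady check from the earlier sketch (interval and timeout taken from the log; not minikube's actual node_ready.go code):

    // Sketch: poll node readiness until Ready=True or the timeout expires,
    // mirroring the repeating GET /api/v1/nodes/multinode-813300 cycle above.
    package main

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                ready, err := nodeIsReady(ctx, c, name) // from the earlier sketch
                if err != nil {
                    return false, nil // treat API errors as transient; keep polling
                }
                return ready, nil
            })
    }
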
	I0610 12:31:07.364564    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:07.364564    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:07.364564    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:07.364564    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:07.375560    8536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 12:31:07.375560    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:07.376229    8536 round_trippers.go:580]     Audit-Id: 545dd30e-8a0f-429d-8819-d204a09fb4c9
	I0610 12:31:07.376229    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:07.376229    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:07.376229    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:07.376229    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:07.376229    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:07 GMT
	I0610 12:31:07.377998    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:07.865852    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:07.865852    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:07.865852    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:07.865852    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:07.868948    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:07.868948    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:07.868948    8536 round_trippers.go:580]     Audit-Id: 095f6c15-9744-4344-b32e-fcd499f64221
	I0610 12:31:07.868948    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:07.868948    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:07.868948    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:07.868948    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:07.868948    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:07 GMT
	I0610 12:31:07.870138    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:08.367045    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:08.367045    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:08.367149    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:08.367149    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:08.370607    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:08.370607    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:08.370607    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:08.370607    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:08.370607    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:08.370607    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:08 GMT
	I0610 12:31:08.370607    8536 round_trippers.go:580]     Audit-Id: 19823bba-3d76-4486-b4a2-424db46ae187
	I0610 12:31:08.370607    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:08.371532    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:08.869632    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:08.869632    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:08.869632    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:08.869632    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:08.873292    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:08.873292    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:08.874256    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:08.874256    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:08.874256    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:08.874256    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:08 GMT
	I0610 12:31:08.874256    8536 round_trippers.go:580]     Audit-Id: 00776131-78cc-409b-8180-7c752bda2b41
	I0610 12:31:08.874256    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:08.875012    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:08.875328    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:09.356174    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:09.356174    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:09.356174    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:09.356174    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:09.361145    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:09.361145    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:09.361145    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:09.361145    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:09.361145    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:09.361145    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:09 GMT
	I0610 12:31:09.361336    8536 round_trippers.go:580]     Audit-Id: 561a4994-96f1-4dc5-931f-3c662c5d48ad
	I0610 12:31:09.361336    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:09.362202    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:09.870079    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:09.870079    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:09.870079    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:09.870079    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:09.874658    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:09.874771    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:09.874823    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:09.874823    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:09.874823    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:09 GMT
	I0610 12:31:09.874823    8536 round_trippers.go:580]     Audit-Id: c7d43032-7422-4a33-bbd8-f40e797970da
	I0610 12:31:09.874823    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:09.874823    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:09.875070    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:10.371088    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:10.371143    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:10.371143    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:10.371143    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:10.381713    8536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 12:31:10.381713    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:10.381713    8536 round_trippers.go:580]     Audit-Id: 6b620a72-4b17-4a68-8a6c-178a71c44b69
	I0610 12:31:10.381713    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:10.381713    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:10.381713    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:10.381713    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:10.381713    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:10 GMT
	I0610 12:31:10.381713    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:10.857403    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:10.857403    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:10.857403    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:10.857498    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:10.861794    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:10.861794    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:10.861794    8536 round_trippers.go:580]     Audit-Id: 1114c439-c7d9-4c50-9978-9b78ad5a4366
	I0610 12:31:10.862125    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:10.862125    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:10.862125    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:10.862125    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:10.862125    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:10 GMT
	I0610 12:31:10.862649    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:11.357205    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:11.357455    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:11.357455    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:11.357455    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:11.362606    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:11.362906    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:11.362906    8536 round_trippers.go:580]     Audit-Id: b5896b7c-0c01-45c2-ae50-333382781561
	I0610 12:31:11.362906    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:11.362906    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:11.362906    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:11.362906    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:11.362906    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:11 GMT
	I0610 12:31:11.363362    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:11.363967    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:11.870385    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:11.870385    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:11.870385    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:11.870385    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:11.874411    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:11.874624    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:11.874624    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:11 GMT
	I0610 12:31:11.874624    8536 round_trippers.go:580]     Audit-Id: 75974c77-7c9c-42f8-a12a-1c4062356981
	I0610 12:31:11.874624    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:11.874624    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:11.874624    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:11.874624    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:11.875048    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:12.363248    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:12.363326    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:12.363326    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:12.363326    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:12.368022    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:12.368022    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:12.368704    8536 round_trippers.go:580]     Audit-Id: 20dfe8b4-e0fc-464e-a67e-9378ef8bc30c
	I0610 12:31:12.368704    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:12.368704    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:12.368704    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:12.368704    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:12.368704    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:12 GMT
	I0610 12:31:12.369241    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:12.857523    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:12.857523    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:12.857523    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:12.857523    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:12.861079    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:12.862093    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:12.862160    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:12.862160    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:12.862160    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:12.862160    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:12 GMT
	I0610 12:31:12.862160    8536 round_trippers.go:580]     Audit-Id: 27a2a33c-0f97-433e-8b7e-ac51a74032fd
	I0610 12:31:12.862160    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:12.862268    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:13.359103    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:13.359161    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:13.359234    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:13.359234    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:13.363609    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:13.363609    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:13.363609    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:13.363609    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:13.363609    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:13.363609    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:13 GMT
	I0610 12:31:13.363609    8536 round_trippers.go:580]     Audit-Id: 1c38016d-8886-47b0-bb3e-f3f7ab72deee
	I0610 12:31:13.363711    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:13.363979    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:13.364503    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:13.856567    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:13.856567    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:13.856567    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:13.856567    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:13.860142    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:13.860551    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:13.860551    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:13.860551    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:13.860551    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:13.860551    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:13 GMT
	I0610 12:31:13.860551    8536 round_trippers.go:580]     Audit-Id: 0014ddf1-0515-41cc-9499-05608e150ddf
	I0610 12:31:13.860551    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:13.860851    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:14.357777    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:14.357777    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:14.357777    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:14.357777    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:14.366354    8536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:31:14.366354    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:14.366354    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:14.366354    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:14.366354    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:14.366354    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:14 GMT
	I0610 12:31:14.366354    8536 round_trippers.go:580]     Audit-Id: 852a7f36-43e9-487c-a927-6cecde986e56
	I0610 12:31:14.366354    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:14.367125    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:14.856870    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:14.856870    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:14.856870    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:14.856870    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:14.860638    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:14.860638    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:14.860638    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:14 GMT
	I0610 12:31:14.860638    8536 round_trippers.go:580]     Audit-Id: 11acd36a-af8f-4553-951e-85ca9ea63563
	I0610 12:31:14.860638    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:14.860638    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:14.860638    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:14.860638    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:14.861309    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:15.358101    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:15.358101    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:15.358101    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:15.358101    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:15.362673    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:15.362673    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:15.362673    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:15 GMT
	I0610 12:31:15.362673    8536 round_trippers.go:580]     Audit-Id: 8e5f4822-4835-42e5-ab07-f395f11247af
	I0610 12:31:15.362673    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:15.362673    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:15.362673    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:15.362673    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:15.363384    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:15.860012    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:15.860110    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:15.860110    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:15.860110    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:15.864256    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:15.864486    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:15.864486    8536 round_trippers.go:580]     Audit-Id: 982f2ca7-d31b-4059-9b88-021b2bc81b79
	I0610 12:31:15.864486    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:15.864486    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:15.864486    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:15.864486    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:15.864486    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:15 GMT
	I0610 12:31:15.864607    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:15.865087    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:16.361273    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:16.361273    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:16.361273    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:16.361273    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:16.364878    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:16.364878    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:16.364878    8536 round_trippers.go:580]     Audit-Id: fb7a1628-f5cc-4802-97c1-e80408edc392
	I0610 12:31:16.364878    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:16.364878    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:16.364878    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:16.365482    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:16.365482    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:16 GMT
	I0610 12:31:16.365790    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:16.863371    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:16.863371    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:16.863371    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:16.863371    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:16.868740    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:16.868740    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:16.868740    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:16.868740    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:16.868740    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:16.868740    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:16.868740    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:16 GMT
	I0610 12:31:16.868740    8536 round_trippers.go:580]     Audit-Id: f56f538b-0d99-4511-a747-70f09906dd49
	I0610 12:31:16.868740    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:17.361283    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:17.361283    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:17.361283    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:17.361283    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:17.365875    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:17.365875    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:17.366333    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:17.366333    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:17 GMT
	I0610 12:31:17.366333    8536 round_trippers.go:580]     Audit-Id: 09c6beed-7ae1-47c8-8353-4b9e566178be
	I0610 12:31:17.366333    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:17.366333    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:17.366333    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:17.367455    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:17.865450    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:17.865450    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:17.865450    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:17.865450    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:17.871557    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:17.871557    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:17.871557    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:17 GMT
	I0610 12:31:17.871557    8536 round_trippers.go:580]     Audit-Id: 74d7db3e-e1f8-4692-aa8a-208592cf3f0e
	I0610 12:31:17.871557    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:17.871557    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:17.871557    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:17.871557    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:17.872270    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:17.872305    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:18.361564    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:18.361775    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:18.361775    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:18.361775    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:18.367381    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:18.367447    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:18.367447    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:18.367447    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:18.367447    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:18 GMT
	I0610 12:31:18.367447    8536 round_trippers.go:580]     Audit-Id: e43159e7-219d-4a2a-8109-c6df286f0526
	I0610 12:31:18.367447    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:18.367447    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:18.368556    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:18.860165    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:18.860165    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:18.860165    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:18.860165    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:18.864741    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:18.865168    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:18.865168    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:18.865168    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:18.865168    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:18.865237    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:18.865237    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:18 GMT
	I0610 12:31:18.865237    8536 round_trippers.go:580]     Audit-Id: 31193f4f-5a0e-441f-920b-a3a715a135fb
	I0610 12:31:18.866131    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:19.360158    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:19.360158    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:19.360158    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:19.360158    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:19.364731    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:19.364731    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:19.364821    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:19.364821    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:19.364821    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:19.364821    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:19.364821    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:19 GMT
	I0610 12:31:19.364821    8536 round_trippers.go:580]     Audit-Id: 2c11c231-226f-4645-82b9-fd7ad7caebf2
	I0610 12:31:19.365714    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:19.862501    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:19.862716    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:19.862716    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:19.862716    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:19.866342    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:19.866960    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:19.866960    8536 round_trippers.go:580]     Audit-Id: dd35cafe-f16c-4734-a401-3c5ae758eb58
	I0610 12:31:19.866960    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:19.866960    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:19.866960    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:19.866960    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:19.866960    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:19 GMT
	I0610 12:31:19.867315    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:20.364292    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:20.364396    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:20.364396    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:20.364396    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:20.368533    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:20.368533    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:20.368533    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:20.368533    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:20.368533    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:20.368533    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:20.368910    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:20 GMT
	I0610 12:31:20.368910    8536 round_trippers.go:580]     Audit-Id: 7f6d2e80-1ea2-4d43-8ca8-70ff1be9c559
	I0610 12:31:20.369342    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:20.369871    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:20.862384    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:20.862384    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:20.862384    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:20.862384    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:20.866116    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:20.866116    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:20.866116    8536 round_trippers.go:580]     Audit-Id: 4926f0b2-c8f5-4e50-bbaa-dbd8b94d5ed6
	I0610 12:31:20.866116    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:20.866116    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:20.866116    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:20.866116    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:20.866116    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:20 GMT
	I0610 12:31:20.866116    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:21.361116    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:21.361116    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:21.361179    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:21.361179    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:21.365288    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:21.365288    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:21.365388    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:21.365388    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:21.365388    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:21.365388    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:21.365388    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:21 GMT
	I0610 12:31:21.365388    8536 round_trippers.go:580]     Audit-Id: 844bac56-5571-44e0-b7da-3f06f20be76d
	I0610 12:31:21.365857    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:21.857464    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:21.857464    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:21.857464    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:21.857464    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:21.861068    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:21.861068    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:21.861068    8536 round_trippers.go:580]     Audit-Id: 1e325dcb-6413-4a8b-9579-f2ccb7f6d5d3
	I0610 12:31:21.861068    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:21.861068    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:21.861068    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:21.861068    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:21.861068    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:21 GMT
	I0610 12:31:21.861878    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:22.371789    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:22.371883    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:22.371883    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:22.371883    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:22.378186    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:22.378186    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:22.378186    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:22.378186    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:22.378725    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:22.378725    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:22 GMT
	I0610 12:31:22.378725    8536 round_trippers.go:580]     Audit-Id: d7f96531-7830-4482-be7a-9f13e50dd6fb
	I0610 12:31:22.378725    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:22.383890    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:22.384923    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
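
The repeated GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300 cycles above are a node-readiness wait loop: roughly every 500 ms the client re-fetches the Node object and checks its "Ready" condition, and node_ready.go:53 logs each pass while the condition is still "False". The Go sketch below illustrates that polling pattern with client-go; it is an illustration only, not minikube's actual node_ready.go code. It assumes an already-configured *kubernetes.Clientset (construction omitted), and the function name waitNodeReady is hypothetical.

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server until the named node reports
// Ready=True or the timeout expires, mirroring the ~500 ms cadence
// visible in the round_trippers log entries above. Sketch only.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			// GET /api/v1/nodes/<name>, the same request the log records.
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					// Matches the log's: node "<name>" has status "Ready":"False"
					fmt.Printf("node %q has status \"Ready\":%q\n", name, cond.Status)
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
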
	I0610 12:31:22.869533    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:22.869693    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:22.869693    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:22.869693    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:22.873138    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:22.873138    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:22.874083    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:22.874107    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:22.874107    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:22 GMT
	I0610 12:31:22.874107    8536 round_trippers.go:580]     Audit-Id: 7c2fdb38-2fee-4ba4-9a69-fecb00a22a0c
	I0610 12:31:22.874107    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:22.874107    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:22.874241    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:23.371629    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:23.371629    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:23.371629    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:23.371629    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:23.378268    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:23.378268    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:23.378268    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:23 GMT
	I0610 12:31:23.378268    8536 round_trippers.go:580]     Audit-Id: 03923fa6-4cfb-4283-a98f-c76fb80bd4b3
	I0610 12:31:23.378268    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:23.378268    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:23.378268    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:23.379202    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:23.379656    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:23.871472    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:23.871472    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:23.871562    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:23.871562    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:23.878517    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:23.878517    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:23.878517    8536 round_trippers.go:580]     Audit-Id: c179c4dc-fc40-42f9-b3e3-9ccbdb015122
	I0610 12:31:23.878517    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:23.878517    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:23.878517    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:23.878517    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:23.878517    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:23 GMT
	I0610 12:31:23.879220    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:24.358793    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:24.358863    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:24.358863    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:24.358863    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:24.362851    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:24.362851    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:24.362851    8536 round_trippers.go:580]     Audit-Id: ca360c92-f767-4e55-a419-0552a7369626
	I0610 12:31:24.362851    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:24.362851    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:24.362851    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:24.362851    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:24.362851    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:24 GMT
	I0610 12:31:24.363528    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:24.856948    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:24.857005    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:24.857005    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:24.857005    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:24.860751    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:24.860751    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:24.860751    8536 round_trippers.go:580]     Audit-Id: 73aa6dca-a1f5-44a3-ba5e-3f70265c9c2e
	I0610 12:31:24.860751    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:24.860751    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:24.861170    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:24.861170    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:24.861170    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:24 GMT
	I0610 12:31:24.861441    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:24.862211    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:25.357660    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:25.357660    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:25.357907    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:25.357907    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:25.363539    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:25.363539    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:25.363539    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:25 GMT
	I0610 12:31:25.363539    8536 round_trippers.go:580]     Audit-Id: 903e0c48-0e77-42ec-b9d1-0baac86ecee2
	I0610 12:31:25.363539    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:25.363539    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:25.363539    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:25.363771    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:25.363941    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:25.870965    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:25.870965    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:25.870965    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:25.870965    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:25.877316    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:25.877874    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:25.877874    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:25.877874    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:25 GMT
	I0610 12:31:25.877874    8536 round_trippers.go:580]     Audit-Id: a7c53500-9ebe-40ed-93a5-77bf3f264e49
	I0610 12:31:25.877874    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:25.877874    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:25.877874    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:25.877874    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:26.370926    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:26.371005    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:26.371067    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:26.371067    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:26.382252    8536 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 12:31:26.383034    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:26.383034    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:26.383034    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:26 GMT
	I0610 12:31:26.383034    8536 round_trippers.go:580]     Audit-Id: 7c47f2b3-a9ea-4689-b7c0-e6a77dee8e6d
	I0610 12:31:26.383034    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:26.383034    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:26.383034    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:26.383422    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:26.857035    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:26.857303    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:26.857303    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:26.857303    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:26.861139    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:26.861139    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:26.861139    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:26 GMT
	I0610 12:31:26.861139    8536 round_trippers.go:580]     Audit-Id: cd58463f-2d1b-483a-9aef-81d962de2284
	I0610 12:31:26.861139    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:26.862023    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:26.862023    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:26.862023    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:26.862223    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:26.862459    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:27.368131    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:27.368384    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:27.368384    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:27.368384    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:27.375233    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:27.375233    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:27.375569    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:27.375569    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:27.375569    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:27.375569    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:27.375569    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:27 GMT
	I0610 12:31:27.375569    8536 round_trippers.go:580]     Audit-Id: c685f387-2006-41de-87d1-2fac2d364dfb
	I0610 12:31:27.375779    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:27.868617    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:27.868617    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:27.868617    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:27.868617    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:27.875205    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:27.875205    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:27.875205    8536 round_trippers.go:580]     Audit-Id: 11154572-8125-449d-aad7-14285bc484fd
	I0610 12:31:27.875264    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:27.875264    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:27.875289    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:27.875289    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:27.875289    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:27 GMT
	I0610 12:31:27.876540    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:28.370393    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:28.370393    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:28.370393    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:28.370393    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:28.373912    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:28.373912    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:28.373912    8536 round_trippers.go:580]     Audit-Id: 272faba4-e44c-4f90-8e7b-014fe09c18ef
	I0610 12:31:28.373912    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:28.373912    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:28.374248    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:28.374248    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:28.374248    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:28 GMT
	I0610 12:31:28.374548    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:28.868158    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:28.868334    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:28.868334    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:28.868334    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:28.872115    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:28.872782    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:28.872782    8536 round_trippers.go:580]     Audit-Id: 5d0f4677-7a25-46c6-a02d-b4674b84bae7
	I0610 12:31:28.872782    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:28.872782    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:28.872782    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:28.872782    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:28.872782    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:28 GMT
	I0610 12:31:28.873177    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:28.873762    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:29.366774    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:29.366869    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:29.366869    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:29.366937    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:29.370058    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:29.370999    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:29.370999    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:29.371050    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:29.371050    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:29.371050    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:29 GMT
	I0610 12:31:29.371050    8536 round_trippers.go:580]     Audit-Id: e3260a1a-2bf0-4a47-b2db-3cbdb7c0fb4b
	I0610 12:31:29.371050    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:29.372056    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:29.864441    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:29.864494    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:29.864549    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:29.864549    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:29.869032    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:29.869085    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:29.869085    8536 round_trippers.go:580]     Audit-Id: f4b8d086-ac92-4e34-9863-da569cfb7415
	I0610 12:31:29.869085    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:29.869085    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:29.869085    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:29.869085    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:29.869085    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:29 GMT
	I0610 12:31:29.869085    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:30.364844    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:30.364844    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:30.364844    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:30.364844    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:30.369311    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:30.369311    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:30.369311    8536 round_trippers.go:580]     Audit-Id: 7a3d5314-3749-46e6-8736-40338ea99b68
	I0610 12:31:30.369311    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:30.369311    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:30.369311    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:30.369311    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:30.369311    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:30 GMT
	I0610 12:31:30.369532    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:30.865549    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:30.865603    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:30.865603    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:30.865603    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:30.871866    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:30.871866    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:30.871866    8536 round_trippers.go:580]     Audit-Id: db76d763-8ba1-4c5b-a42e-2cecd3c0c3db
	I0610 12:31:30.871866    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:30.871866    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:30.871866    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:30.871866    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:30.871866    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:30 GMT
	I0610 12:31:30.872104    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:31.365725    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:31.365796    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:31.365796    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:31.365796    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:31.370234    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:31.370946    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:31.370946    8536 round_trippers.go:580]     Audit-Id: 5383b6e7-d074-4fd4-9db9-4a2e887f2722
	I0610 12:31:31.370946    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:31.370946    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:31.370946    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:31.370946    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:31.370946    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:31 GMT
	I0610 12:31:31.371514    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:31.372138    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:31.862468    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:31.862548    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:31.862548    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:31.862548    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:31.867774    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:31.868107    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:31.868107    8536 round_trippers.go:580]     Audit-Id: a64d1ee7-c0f1-48ea-b59f-a5665fb7089e
	I0610 12:31:31.868107    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:31.868107    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:31.868107    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:31.868107    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:31.868107    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:31 GMT
	I0610 12:31:31.868336    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:32.363307    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:32.363307    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:32.363307    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:32.363307    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:32.367136    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:32.367761    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:32.367761    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:32.367761    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:32.367761    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:32 GMT
	I0610 12:31:32.367761    8536 round_trippers.go:580]     Audit-Id: c8c5eb2a-f110-46da-b0d6-6475e680fc96
	I0610 12:31:32.367761    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:32.367761    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:32.368160    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:32.863033    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:32.863033    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:32.863033    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:32.863033    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:32.866628    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:32.866628    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:32.866628    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:32.866628    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:32 GMT
	I0610 12:31:32.866628    8536 round_trippers.go:580]     Audit-Id: 9fda6572-778c-4a60-a9a1-562a5b61a5e1
	I0610 12:31:32.867340    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:32.867340    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:32.867340    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:32.867512    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:33.362823    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:33.362823    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:33.362823    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:33.362823    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:33.367401    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:33.367644    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:33.367644    8536 round_trippers.go:580]     Audit-Id: 4556772d-63da-4957-8079-37f6421f63ad
	I0610 12:31:33.367644    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:33.367644    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:33.367644    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:33.367644    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:33.367644    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:33 GMT
	I0610 12:31:33.368417    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:33.865974    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:33.866169    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:33.866169    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:33.866169    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:33.868965    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:33.868965    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:33.869734    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:33.869734    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:33.869734    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:33.869734    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:33 GMT
	I0610 12:31:33.869734    8536 round_trippers.go:580]     Audit-Id: edc9d0d1-4814-48f6-9056-9ab14ef05667
	I0610 12:31:33.869734    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:33.870569    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:33.870569    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:34.367368    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:34.367368    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:34.367443    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:34.367443    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:34.374281    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:34.374281    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:34.375217    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:34 GMT
	I0610 12:31:34.375217    8536 round_trippers.go:580]     Audit-Id: 123712e4-511d-4bd4-ba72-cb1ae25a05ef
	I0610 12:31:34.375217    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:34.375217    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:34.375217    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:34.375217    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:34.375708    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:34.866882    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:34.866882    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:34.866882    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:34.866882    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:34.872362    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:34.872484    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:34.872484    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:34.872484    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:34.872484    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:34.872484    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:34 GMT
	I0610 12:31:34.872484    8536 round_trippers.go:580]     Audit-Id: 4201143c-f970-42b2-88a2-b0b877cdef02
	I0610 12:31:34.872484    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:34.872943    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:35.364571    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:35.364571    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:35.364571    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:35.364571    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:35.368215    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:35.368215    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:35.368215    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:35.368215    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:35.368215    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:35 GMT
	I0610 12:31:35.368215    8536 round_trippers.go:580]     Audit-Id: fd09e71b-269c-438a-807d-9a432cee024d
	I0610 12:31:35.368215    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:35.368215    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:35.368215    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:35.867160    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:35.867260    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:35.867330    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:35.867330    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:35.870675    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:35.870675    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:35.870675    8536 round_trippers.go:580]     Audit-Id: 432b1fa0-2b23-4016-a3e0-d9b2b3406905
	I0610 12:31:35.870675    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:35.870675    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:35.871419    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:35.871419    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:35.871419    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:35 GMT
	I0610 12:31:35.871570    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:35.871960    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:36.365961    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:36.365961    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:36.365961    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:36.365961    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:36.372184    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:36.372184    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:36.372184    8536 round_trippers.go:580]     Audit-Id: 9beb8238-9c6e-440c-921a-bc94e4ed2f4b
	I0610 12:31:36.372184    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:36.372184    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:36.372184    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:36.372184    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:36.372184    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:36 GMT
	I0610 12:31:36.372863    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:36.863959    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:36.863959    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:36.863959    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:36.863959    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:36.869673    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:36.869673    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:36.869673    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:36.869673    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:36.869673    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:36.870193    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:36 GMT
	I0610 12:31:36.870193    8536 round_trippers.go:580]     Audit-Id: 47650ea9-8114-4579-968f-197458528810
	I0610 12:31:36.870193    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:36.870307    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:37.363447    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:37.363521    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:37.363593    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:37.363593    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:37.367192    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:37.367192    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:37.367192    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:37.367192    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:37.367192    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:37.367192    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:37.367192    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:37 GMT
	I0610 12:31:37.367192    8536 round_trippers.go:580]     Audit-Id: 6961d302-9957-4018-be0a-c46fbe7c037b
	I0610 12:31:37.369113    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:37.861980    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:37.861980    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:37.861980    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:37.861980    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:37.865953    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:37.865953    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:37.866359    8536 round_trippers.go:580]     Audit-Id: abcbec62-8a5f-4b81-b57d-19ecc2155e36
	I0610 12:31:37.866359    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:37.866359    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:37.866359    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:37.866359    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:37.866359    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:37 GMT
	I0610 12:31:37.866873    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:38.362461    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:38.362461    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:38.362570    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:38.362570    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:38.368350    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:38.368350    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:38.368350    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:38.368350    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:38.368350    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:38.368350    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:38 GMT
	I0610 12:31:38.368350    8536 round_trippers.go:580]     Audit-Id: e186a4a0-b6d7-490a-b85f-d736f69651b5
	I0610 12:31:38.368350    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:38.369387    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:38.369793    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:38.860093    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:38.860384    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:38.860384    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:38.860384    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:38.863760    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:38.864796    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:38.864796    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:38.864796    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:38.864796    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:38.864796    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:38.864796    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:38 GMT
	I0610 12:31:38.864796    8536 round_trippers.go:580]     Audit-Id: 77978789-9ca8-42cc-b693-c333d51015d1
	I0610 12:31:38.865448    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:39.362259    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:39.362259    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:39.362259    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:39.362259    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:39.365816    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:39.365816    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:39.365816    8536 round_trippers.go:580]     Audit-Id: 0a193f20-5eb2-4830-8193-e296cd820111
	I0610 12:31:39.365816    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:39.365816    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:39.365816    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:39.366260    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:39.366260    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:39 GMT
	I0610 12:31:39.366817    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:39.866472    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:39.866472    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:39.866472    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:39.866472    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:39.871011    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:39.871276    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:39.871276    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:39 GMT
	I0610 12:31:39.871276    8536 round_trippers.go:580]     Audit-Id: d06249ae-59f0-45a4-bdc8-2acbc8cb5fc9
	I0610 12:31:39.871276    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:39.871276    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:39.871356    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:39.871356    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:39.871457    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:40.363517    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:40.363517    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:40.363517    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:40.363517    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:40.367813    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:40.367813    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:40.367813    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:40.367813    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:40 GMT
	I0610 12:31:40.367813    8536 round_trippers.go:580]     Audit-Id: cd04bfbe-a408-452d-85b5-c0383bc8bdf9
	I0610 12:31:40.367813    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:40.367813    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:40.367813    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:40.368351    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:40.861947    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:40.861947    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:40.861947    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:40.861947    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:40.865536    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:40.865536    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:40.865536    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:40.865727    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:40.865727    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:40 GMT
	I0610 12:31:40.865727    8536 round_trippers.go:580]     Audit-Id: 06c81486-729e-4672-bb6e-374279bb4a68
	I0610 12:31:40.865727    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:40.865727    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:40.866092    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:40.866615    8536 node_ready.go:49] node "multinode-813300" has status "Ready":"True"
	I0610 12:31:40.866818    8536 node_ready.go:38] duration metric: took 36.0105676s for node "multinode-813300" to be "Ready" ...
	I0610 12:31:40.866881    8536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
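	(Note: the GETs above arrive roughly every 500 ms until the node reports Ready, which is the cadence a poll-until-ready loop produces. A hedged sketch of such a loop, assuming k8s.io/client-go and k8s.io/apimachinery; waitNodeReady is a hypothetical helper, not the node_ready.go implementation:)

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server about every 500 ms until the node's
// NodeReady condition is "True" or the timeout expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}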
	I0610 12:31:40.866934    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods
	I0610 12:31:40.866934    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:40.866934    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:40.866934    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:40.875248    8536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:31:40.875248    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:40.875248    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:40 GMT
	I0610 12:31:40.875248    8536 round_trippers.go:580]     Audit-Id: b1bedf74-b1ba-4268-a91b-3a1af5810495
	I0610 12:31:40.875248    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:40.875248    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:40.875248    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:40.875248    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:40.878632    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1803"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87076 chars]
	I0610 12:31:40.882999    8536 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
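	(Note: the per-pod wait that follows repeats the same pattern against the Pod's status.conditions. A sketch of the Ready test, assuming k8s.io/api/core/v1; isPodReady is a hypothetical name:)

package readiness

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is "True",
// mirroring the checks logged for coredns-7db6d8ff4d-kbhvv below.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}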
	I0610 12:31:40.883208    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:40.883235    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:40.883235    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:40.883235    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:40.886949    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:40.887025    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:40.887025    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:40.887025    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:40.887025    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:40.887025    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:40 GMT
	I0610 12:31:40.887025    8536 round_trippers.go:580]     Audit-Id: cebd202a-d1d2-436f-99a6-9a39287f2ada
	I0610 12:31:40.887025    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:40.887025    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:40.887896    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:40.887956    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:40.887956    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:40.887956    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:40.890666    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:40.890666    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:40.890666    8536 round_trippers.go:580]     Audit-Id: 71a06392-2b67-437d-905f-c7f4eca8b615
	I0610 12:31:40.890666    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:40.890666    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:40.890666    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:40.890666    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:40.890666    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:40 GMT
	I0610 12:31:40.891651    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:41.393965    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:41.393965    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:41.393965    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:41.393965    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:41.397551    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:41.398038    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:41.398038    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:41.398038    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:41.398038    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:41.398038    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:41 GMT
	I0610 12:31:41.398038    8536 round_trippers.go:580]     Audit-Id: 20167246-01ae-4ed9-b9c4-4ec95a01520a
	I0610 12:31:41.398038    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:41.398434    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:41.399353    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:41.399353    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:41.399353    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:41.399353    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:41.402153    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:41.402153    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:41.402153    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:41.402153    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:41 GMT
	I0610 12:31:41.402153    8536 round_trippers.go:580]     Audit-Id: 9e11eaa6-4f17-4e96-a9cf-f3622de977a1
	I0610 12:31:41.402153    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:41.402153    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:41.402153    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:41.402901    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:41.897170    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:41.897170    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:41.897170    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:41.897170    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:41.901739    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:41.901739    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:41.901739    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:41.901739    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:41.901739    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:41 GMT
	I0610 12:31:41.901739    8536 round_trippers.go:580]     Audit-Id: a23b3e22-69d4-41a6-9b06-261613d1f649
	I0610 12:31:41.901739    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:41.901739    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:41.902169    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:41.903133    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:41.903192    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:41.903192    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:41.903192    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:41.906015    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:41.906015    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:41.906314    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:41.906314    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:41.906314    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:41.906314    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:41.906314    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:41 GMT
	I0610 12:31:41.906314    8536 round_trippers.go:580]     Audit-Id: 8de8b9a1-307d-42b7-884b-da33c34ab51b
	I0610 12:31:41.907189    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:42.395474    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:42.395546    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:42.395546    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:42.395546    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:42.399757    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:42.399757    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:42.399757    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:42.399757    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:42 GMT
	I0610 12:31:42.399757    8536 round_trippers.go:580]     Audit-Id: c5185f0a-e8e3-4b12-9005-aa3cd8edd003
	I0610 12:31:42.399860    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:42.399860    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:42.399860    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:42.400186    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:42.400309    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:42.400309    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:42.400309    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:42.400309    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:42.407948    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:31:42.408282    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:42.408282    8536 round_trippers.go:580]     Audit-Id: 77444e99-e37a-4004-9f28-64e7914d61d4
	I0610 12:31:42.408282    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:42.408282    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:42.408282    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:42.408282    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:42.408382    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:42 GMT
	I0610 12:31:42.408495    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:42.893615    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:42.893615    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:42.893615    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:42.893615    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:42.898185    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:42.898185    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:42.898292    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:42.898292    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:42.898292    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:42 GMT
	I0610 12:31:42.898292    8536 round_trippers.go:580]     Audit-Id: 47cc9d81-bb02-41a4-919e-770d96c4e6c6
	I0610 12:31:42.898292    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:42.898292    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:42.898500    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:42.898956    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:42.898956    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:42.898956    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:42.898956    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:42.904847    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:42.904847    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:42.904847    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:42.904847    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:42.904847    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:42 GMT
	I0610 12:31:42.904847    8536 round_trippers.go:580]     Audit-Id: 52aeba0c-2f4b-40e0-ab4f-04dcab962a1d
	I0610 12:31:42.904847    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:42.904847    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:42.905419    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:42.905636    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
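The lines above are one iteration of minikube's pod_ready wait loop: roughly every 500ms it re-fetches the CoreDNS pod and its node, then logs the pod's Ready condition. A minimal client-go sketch of an equivalent check follows. This is an illustration only, not minikube's actual pod_ready.go code; the kubeconfig path, the 500ms interval, and the 6-minute timeout are placeholder assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; minikube builds its REST config differently.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Re-GET the pod every 500ms (the cadence visible in the timestamps above)
	// until its Ready condition is True or the timeout expires.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-kbhvv", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			return isPodReady(pod), nil
		})
	fmt.Println("wait finished:", err)
}

Each poll attempt issues a fresh GET, which is exactly the request/response pattern repeated throughout this log.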
	I0610 12:31:43.384860    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:43.384860    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:43.384860    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:43.384860    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:43.389324    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:43.389324    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:43.390007    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:43.390007    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:43.390007    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:43 GMT
	I0610 12:31:43.390007    8536 round_trippers.go:580]     Audit-Id: f7c02a1c-b43c-4518-ab58-479c7a00e238
	I0610 12:31:43.390007    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:43.390007    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:43.390257    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:43.391354    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:43.391449    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:43.391449    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:43.391543    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:43.394300    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:43.394300    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:43.394300    8536 round_trippers.go:580]     Audit-Id: 3d5fb550-6b08-4aba-bffa-b2606ba0cca1
	I0610 12:31:43.394300    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:43.394300    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:43.394300    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:43.394300    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:43.394300    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:43 GMT
	I0610 12:31:43.394954    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:43.887158    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:43.887158    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:43.887158    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:43.887158    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:43.891745    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:43.891745    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:43.891745    8536 round_trippers.go:580]     Audit-Id: 26bc5778-7a13-471e-9eb8-4268deafe95b
	I0610 12:31:43.891745    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:43.892426    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:43.892426    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:43.892426    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:43.892426    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:43 GMT
	I0610 12:31:43.892787    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:43.893428    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:43.893428    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:43.893428    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:43.893428    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:43.897008    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:43.897008    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:43.897008    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:43.897008    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:43.897008    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:43 GMT
	I0610 12:31:43.897008    8536 round_trippers.go:580]     Audit-Id: ff5b4449-7f45-4706-a063-dfb41bfed254
	I0610 12:31:43.897008    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:43.897008    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:43.897008    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:44.388019    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:44.388019    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:44.388019    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:44.388019    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:44.394092    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:44.394092    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:44.394092    8536 round_trippers.go:580]     Audit-Id: 887b662d-0b46-421d-a490-942070f93ce2
	I0610 12:31:44.394092    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:44.394092    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:44.394092    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:44.394092    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:44.394092    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:44 GMT
	I0610 12:31:44.394092    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:44.394092    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:44.394092    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:44.394092    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:44.394092    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:44.400683    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:44.400683    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:44.400683    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:44.400683    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:44.400683    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:44.400683    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:44 GMT
	I0610 12:31:44.400683    8536 round_trippers.go:580]     Audit-Id: 158d71bf-c09b-4b2a-96c9-085357dbef27
	I0610 12:31:44.400683    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:44.401461    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:44.893190    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:44.893413    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:44.893413    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:44.893413    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:44.897627    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:44.898298    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:44.898298    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:44.898298    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:44.898395    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:44.898395    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:44 GMT
	I0610 12:31:44.898395    8536 round_trippers.go:580]     Audit-Id: 70a7f404-fc27-4843-bc6d-fb64e68b1325
	I0610 12:31:44.898395    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:44.898655    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:44.899724    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:44.899724    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:44.899724    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:44.899724    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:44.903306    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:44.903306    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:44.903306    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:44.903306    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:44 GMT
	I0610 12:31:44.903306    8536 round_trippers.go:580]     Audit-Id: 0c43e73e-df07-42d1-af92-3118f88b0e14
	I0610 12:31:44.903306    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:44.903306    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:44.903306    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:44.903849    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:45.390656    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:45.390867    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:45.390867    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:45.390867    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:45.394685    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:45.394742    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:45.394742    8536 round_trippers.go:580]     Audit-Id: 36b1e8b7-70e0-4424-a52a-def4c4868689
	I0610 12:31:45.394742    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:45.394742    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:45.394742    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:45.394742    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:45.394742    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:45 GMT
	I0610 12:31:45.395720    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:45.396312    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:45.396312    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:45.396312    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:45.396312    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:45.400685    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:45.400685    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:45.400685    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:45.400685    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:45.400685    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:45.400685    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:45 GMT
	I0610 12:31:45.400685    8536 round_trippers.go:580]     Audit-Id: 4f0bd21f-02cb-48a8-a6eb-a791e8a0cd6d
	I0610 12:31:45.400685    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:45.401166    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:45.401360    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
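The round_trippers.go lines come from client-go's debug transport, which wraps the HTTP client and, at high log verbosity, prints each request's method, URL, and headers along with the response status and headers. The X-Kubernetes-Pf-* response headers are added by the API server's Priority and Fairness feature and identify the FlowSchema and PriorityLevel that classified the request. A rough sketch of such a logging wrapper, written against plain net/http rather than client-go's real implementation:

package main

import (
	"log"
	"net/http"
)

// debugTransport logs requests and responses in the spirit of client-go's
// round_trippers.go output; the real implementation differs in detail.
type debugTransport struct {
	inner http.RoundTripper
}

func (t *debugTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, vs := range req.Header {
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	resp, err := t.inner.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s", resp.Status)
	log.Printf("Response Headers:")
	for k, vs := range resp.Header {
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	return resp, nil
}

func main() {
	// Install the wrapper on an ordinary client; every call is then logged.
	client := &http.Client{Transport: &debugTransport{inner: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}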
	I0610 12:31:45.890618    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:45.890618    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:45.890618    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:45.890618    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:45.894200    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:45.894200    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:45.894200    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:45.894709    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:45.894709    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:45 GMT
	I0610 12:31:45.894709    8536 round_trippers.go:580]     Audit-Id: e347400d-206c-469f-a710-8f9c73b3329d
	I0610 12:31:45.894709    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:45.894709    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:45.894873    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:45.895757    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:45.895812    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:45.895812    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:45.895812    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:45.910510    8536 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0610 12:31:45.910510    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:45.910688    8536 round_trippers.go:580]     Audit-Id: f2bed27c-1c92-4323-8df6-3daf6c1c93a1
	I0610 12:31:45.910688    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:45.910688    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:45.910688    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:45.910688    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:45.910688    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:45 GMT
	I0610 12:31:45.911511    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:46.388409    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:46.388409    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:46.388409    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:46.388409    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:46.391453    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:46.391453    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:46.391453    8536 round_trippers.go:580]     Audit-Id: 74a99990-14af-4987-97ba-10c0f95b22b4
	I0610 12:31:46.391453    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:46.391453    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:46.391453    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:46.391453    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:46.392476    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:46 GMT
	I0610 12:31:46.392738    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:46.393349    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:46.393349    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:46.393349    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:46.393349    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:46.396686    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:46.396949    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:46.396949    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:46.396949    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:46.396949    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:46 GMT
	I0610 12:31:46.397050    8536 round_trippers.go:580]     Audit-Id: 477c0a89-a160-4064-bb20-c0b778b541c5
	I0610 12:31:46.397050    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:46.397050    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:46.397119    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:46.890760    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:46.890836    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:46.890836    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:46.890836    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:46.894588    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:46.894588    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:46.895452    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:46.895452    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:46 GMT
	I0610 12:31:46.895452    8536 round_trippers.go:580]     Audit-Id: d394f82c-0010-4b63-8a37-19b12886ab57
	I0610 12:31:46.895452    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:46.895452    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:46.895452    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:46.895688    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:46.896530    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:46.896530    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:46.896530    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:46.896530    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:46.898907    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:46.898907    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:46.898907    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:46 GMT
	I0610 12:31:46.898907    8536 round_trippers.go:580]     Audit-Id: 358971b7-94bb-408d-b4c1-0b03694cc3c3
	I0610 12:31:46.898907    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:46.898907    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:46.899482    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:46.899482    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:46.899625    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:47.397642    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:47.397642    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:47.397642    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:47.397642    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:47.402232    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:47.402347    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:47.402417    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:47.402417    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:47.402417    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:47.402417    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:47.402417    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:47 GMT
	I0610 12:31:47.402417    8536 round_trippers.go:580]     Audit-Id: 8667f2ff-135a-4bb0-a6a2-c1bfff9825ee
	I0610 12:31:47.403339    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:47.404288    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:47.404288    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:47.404288    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:47.404288    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:47.407761    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:47.407761    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:47.407761    8536 round_trippers.go:580]     Audit-Id: 1a3c4bbc-1cef-4f24-9267-eba40737a3b8
	I0610 12:31:47.407761    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:47.407761    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:47.407844    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:47.407844    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:47.407844    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:47 GMT
	I0610 12:31:47.407950    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:47.408630    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
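Polling is not the only way to wait for readiness: a watch lets the API server stream pod updates, so the client reacts to the Ready transition instead of re-issuing a GET twice per second. The sketch below shows that alternative under the same placeholder kubeconfig assumption; minikube itself uses the polling loop recorded in this log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Open a watch scoped to the single pod by name.
	w, err := clientset.CoreV1().Pods("kube-system").Watch(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=coredns-7db6d8ff4d-kbhvv",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// Consume streamed events and stop once the Ready condition turns True.
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pod is Ready")
				return
			}
		}
	}
}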
	I0610 12:31:47.888812    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:47.888812    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:47.888812    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:47.888812    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:47.893703    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:47.893774    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:47.893774    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:47.893774    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:47 GMT
	I0610 12:31:47.893774    8536 round_trippers.go:580]     Audit-Id: 938807f2-7b80-4b77-92d9-89082d06391c
	I0610 12:31:47.893774    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:47.893845    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:47.893845    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:47.893907    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:47.893907    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:47.893907    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:47.893907    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:47.893907    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:47.903722    8536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 12:31:47.903722    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:47.903722    8536 round_trippers.go:580]     Audit-Id: 9ba33a89-6eb7-4fe2-8e6a-a148dd324aaa
	I0610 12:31:47.903722    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:47.903722    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:47.903722    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:47.903722    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:47.903722    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:47 GMT
	I0610 12:31:47.904513    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:48.388872    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:48.388925    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:48.388966    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:48.388966    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:48.392340    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:48.393467    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:48.393467    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:48.393467    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:48.393467    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:48.393467    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:48.393467    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:48 GMT
	I0610 12:31:48.393467    8536 round_trippers.go:580]     Audit-Id: 353e44e6-6cee-41a8-9e15-b23addccfa7a
	I0610 12:31:48.393744    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:48.394529    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:48.394585    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:48.394585    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:48.394585    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:48.397058    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:48.397058    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:48.397058    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:48.397058    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:48 GMT
	I0610 12:31:48.397058    8536 round_trippers.go:580]     Audit-Id: 392c6711-5b3e-460e-bba0-fdbb6c09b0bf
	I0610 12:31:48.397058    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:48.397058    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:48.397462    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:48.397844    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:48.888609    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:48.888692    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:48.888692    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:48.888692    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:48.893206    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:48.893206    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:48.893206    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:48.893206    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:48.893206    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:48 GMT
	I0610 12:31:48.893547    8536 round_trippers.go:580]     Audit-Id: cb7eaeaf-54cd-43c5-beae-4890772513a3
	I0610 12:31:48.893547    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:48.893547    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:48.893766    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:48.894529    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:48.894529    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:48.894529    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:48.894529    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:48.897453    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:48.897453    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:48.897453    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:48.897453    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:48.897453    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:48.897453    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:48.897453    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:48 GMT
	I0610 12:31:48.897453    8536 round_trippers.go:580]     Audit-Id: 0f401687-d102-45be-bb7a-f1072cc0df72
	I0610 12:31:48.897453    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:49.390115    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:49.390115    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:49.390115    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:49.390115    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:49.393737    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:49.394246    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:49.394246    8536 round_trippers.go:580]     Audit-Id: 467c97ce-fc51-4bd7-9830-ce68ddab6306
	I0610 12:31:49.394246    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:49.394357    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:49.394357    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:49.394357    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:49.394357    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:49 GMT
	I0610 12:31:49.394357    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:49.395546    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:49.395628    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:49.395628    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:49.395628    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:49.398958    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:49.398958    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:49.398958    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:49 GMT
	I0610 12:31:49.398958    8536 round_trippers.go:580]     Audit-Id: d70aee54-6990-4db7-9322-dc924be95bbd
	I0610 12:31:49.398958    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:49.398958    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:49.398958    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:49.398958    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:49.399473    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:49.888695    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:49.888753    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:49.888824    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:49.888824    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:49.892114    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:49.892114    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:49.892114    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:49.892114    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:49.892114    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:49 GMT
	I0610 12:31:49.893074    8536 round_trippers.go:580]     Audit-Id: 27371142-dca1-4859-8980-e9439ec69651
	I0610 12:31:49.893074    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:49.893074    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:49.893264    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:49.894094    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:49.894094    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:49.894094    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:49.894094    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:49.896907    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:49.896907    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:49.896907    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:49 GMT
	I0610 12:31:49.896907    8536 round_trippers.go:580]     Audit-Id: 5132f8bc-6215-4b5f-8bdc-593b27e47cd8
	I0610 12:31:49.896907    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:49.896907    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:49.896907    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:49.896907    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:49.897627    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:49.897897    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
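	[editor's note] The `round_trippers.go:463/469/574/580` lines that dominate this excerpt are client-go's transport-level debug logging, emitted at high `-v` verbosity for every request/response the minikube binary makes to the apiserver at 172.17.150.144:8443. As a rough illustration of that pattern only, here is a minimal stand-in in Go, not client-go's actual debugging round tripper: an `http.RoundTripper` wrapper that logs the method, URL, a couple of request headers, and the response status with elapsed time, much like the `GET ...` / `Request Headers:` / `Response Status: 200 OK in N milliseconds` lines above.

	// loggingRT is a hedged, stand-alone sketch of a debug transport.
	// It is NOT client-go's implementation; names are illustrative.
	package main

	import (
		"log"
		"net/http"
		"time"
	)

	type loggingRT struct{ next http.RoundTripper }

	func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
		start := time.Now()
		log.Printf("%s %s", req.Method, req.URL) // mirrors round_trippers.go:463
		for _, h := range []string{"Accept", "User-Agent"} {
			log.Printf("    %s: %s", h, req.Header.Get(h)) // request headers
		}
		resp, err := l.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		log.Printf("Response Status: %s in %d milliseconds",
			resp.Status, time.Since(start).Milliseconds())
		return resp, nil
	}

	func main() {
		client := &http.Client{Transport: loggingRT{http.DefaultTransport}}
		// Illustrative endpoint only; the test hits the apiserver on :8443.
		resp, err := client.Get("https://example.com/")
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
	}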
	I0610 12:31:50.388431    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:50.388525    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:50.388525    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:50.388525    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:50.392035    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:50.392035    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:50.392110    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:50.392110    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:50 GMT
	I0610 12:31:50.392110    8536 round_trippers.go:580]     Audit-Id: bccfa588-230e-473e-a59a-eb9f796f86d9
	I0610 12:31:50.392110    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:50.392110    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:50.392110    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:50.392206    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:50.393251    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:50.393434    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:50.393434    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:50.393434    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:50.395743    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:50.396713    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:50.396713    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:50.396713    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:50.396713    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:50.396713    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:50 GMT
	I0610 12:31:50.396713    8536 round_trippers.go:580]     Audit-Id: dcf61100-4ec6-4dcd-a38c-5094f998079e
	I0610 12:31:50.396713    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:50.396987    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:50.890093    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:50.890093    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:50.890093    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:50.890093    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:50.893633    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:50.893633    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:50.893710    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:50.893710    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:50.893710    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:50.893710    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:50 GMT
	I0610 12:31:50.893710    8536 round_trippers.go:580]     Audit-Id: 30c026f2-12d3-498a-a0cc-70a25575e1ff
	I0610 12:31:50.893801    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:50.894039    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:50.895093    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:50.895122    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:50.895122    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:50.895122    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:50.899131    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:50.899131    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:50.899131    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:50.899131    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:50.899131    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:50.899131    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:50.899342    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:50 GMT
	I0610 12:31:50.899342    8536 round_trippers.go:580]     Audit-Id: 44f40e08-43ab-4aaa-80b4-c0c42902607f
	I0610 12:31:50.899600    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:51.394860    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:51.394860    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:51.394860    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:51.394860    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:51.404675    8536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 12:31:51.404675    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:51.404675    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:51.404675    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:51.404675    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:51.404675    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:51.405365    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:51 GMT
	I0610 12:31:51.405365    8536 round_trippers.go:580]     Audit-Id: 7c0e2efe-3c74-4922-b1e5-0d447d5a77bd
	I0610 12:31:51.405685    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:51.406531    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:51.406743    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:51.406743    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:51.406743    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:51.410731    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:51.410779    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:51.410779    8536 round_trippers.go:580]     Audit-Id: 3831c8c1-fe1b-4d77-a44f-71f5f9ac2bfa
	I0610 12:31:51.410779    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:51.410779    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:51.410779    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:51.410779    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:51.410779    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:51 GMT
	I0610 12:31:51.412896    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:51.895996    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:51.895996    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:51.895996    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:51.895996    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:51.899561    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:51.899561    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:51.899561    8536 round_trippers.go:580]     Audit-Id: 75aefa48-8374-491c-ba71-5de238d340bd
	I0610 12:31:51.900602    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:51.900624    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:51.900624    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:51.900624    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:51.900624    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:51 GMT
	I0610 12:31:51.900967    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:51.901814    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:51.901814    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:51.901902    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:51.901902    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:51.905255    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:51.905255    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:51.905255    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:51.905255    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:51.905255    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:51.905255    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:51 GMT
	I0610 12:31:51.905255    8536 round_trippers.go:580]     Audit-Id: 211c7ae1-94a7-4c8a-b7a1-ee8099f8c3aa
	I0610 12:31:51.905255    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:51.905255    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:51.906197    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
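	[editor's note] Each ~500ms iteration in this log is one readiness probe: GET the pod, GET its node, inspect the pod's `Ready` condition, sleep, repeat, until the 6m0s wait budget runs out. Below is a hedged sketch of such a loop using client-go; the kubeconfig path is a placeholder and this is not minikube's actual `pod_ready.go`, just the same pattern under stated assumptions.

	// A minimal readiness-poll sketch with client-go (illustrative only).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True,
	// the same field the "has status \"Ready\":\"False\"" verdicts reflect.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; real callers load their own config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms for up to 6 minutes, matching the log's cadence.
		err = wait.PollUntilContextTimeout(context.Background(),
			500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").
					Get(ctx, "coredns-7db6d8ff4d-kbhvv", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not yet"
				}
				return isPodReady(pod), nil
			})
		fmt.Println("ready:", err == nil) // nil means the condition flipped to True
	}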
	I0610 12:31:52.395067    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:52.395188    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:52.395188    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:52.395188    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:52.403510    8536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:31:52.403510    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:52.403510    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:52 GMT
	I0610 12:31:52.403510    8536 round_trippers.go:580]     Audit-Id: 1df576ea-88e2-4612-9902-f5d0c5db1989
	I0610 12:31:52.403510    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:52.403510    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:52.403510    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:52.403510    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:52.404487    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:52.405323    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:52.405482    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:52.405482    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:52.405482    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:52.412442    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:52.412442    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:52.412442    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:52.412442    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:52 GMT
	I0610 12:31:52.412442    8536 round_trippers.go:580]     Audit-Id: 689662bb-6de8-43c1-8301-f7d3d6334113
	I0610 12:31:52.412442    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:52.412442    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:52.412442    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:52.412442    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:52.894259    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:52.894259    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:52.894259    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:52.894259    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:52.899288    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:52.899288    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:52.899288    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:52.899288    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:52 GMT
	I0610 12:31:52.899288    8536 round_trippers.go:580]     Audit-Id: d5da442c-d5f1-4774-a843-1cfa9a480a59
	I0610 12:31:52.899288    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:52.899288    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:52.899288    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:52.899288    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:52.900422    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:52.900422    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:52.900422    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:52.900422    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:52.902865    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:52.902865    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:52.902865    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:52.902865    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:52 GMT
	I0610 12:31:52.902865    8536 round_trippers.go:580]     Audit-Id: dd53dc69-83b0-449e-9712-b32d87352a80
	I0610 12:31:52.902865    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:52.902865    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:52.902865    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:52.903878    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:53.397709    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:53.398003    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:53.398003    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:53.398003    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:53.402830    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:53.402912    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:53.402912    8536 round_trippers.go:580]     Audit-Id: 648cf4be-8c26-4760-a879-a73c68fee464
	I0610 12:31:53.402912    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:53.402912    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:53.402912    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:53.403000    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:53.403000    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:53 GMT
	I0610 12:31:53.403064    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:53.403907    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:53.403907    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:53.403907    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:53.403907    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:53.407416    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:53.407600    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:53.407600    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:53.407600    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:53.407600    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:53.407725    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:53.407725    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:53 GMT
	I0610 12:31:53.407725    8536 round_trippers.go:580]     Audit-Id: af476a44-29a2-421c-a9f6-a79e047a9919
	I0610 12:31:53.408116    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:53.898819    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:53.898893    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:53.898893    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:53.898893    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:53.902740    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:53.903276    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:53.903276    8536 round_trippers.go:580]     Audit-Id: 9d37fdff-b95b-416c-a615-e50616f5bbbf
	I0610 12:31:53.903276    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:53.903276    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:53.903276    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:53.903276    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:53.903276    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:53 GMT
	I0610 12:31:53.903657    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:53.904571    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:53.904642    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:53.904642    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:53.904642    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:53.911066    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:53.911066    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:53.911066    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:53.911066    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:53.911066    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:53 GMT
	I0610 12:31:53.911066    8536 round_trippers.go:580]     Audit-Id: aa07ce39-2b0a-4c55-9e6d-999f5cd3c569
	I0610 12:31:53.911066    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:53.911066    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:53.911066    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:53.911066    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
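	[editor's note] The `request.go:1212` lines dump the raw Pod JSON each probe inspects; the `Ready":"False"` verdict just above comes from the `status.conditions` entry of type `Ready` inside that (truncated) body. A minimal stand-alone sketch of extracting that condition from such a payload, using only the standard library and an assumed, truncated example body:

	// Hedged sketch: pull the Ready condition out of a Pod JSON body.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// podStatus keeps only the fields the readiness check needs.
	type podStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func readyFromJSON(body []byte) (bool, error) {
		var p podStatus
		if err := json.Unmarshal(body, &p); err != nil {
			return false, err
		}
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True", nil
			}
		}
		return false, nil
	}

	func main() {
		// Truncated illustrative body; the real response carries the full Pod.
		body := []byte(`{"kind":"Pod","status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
		ok, err := readyFromJSON(body)
		fmt.Println(ok, err) // prints: false <nil>
	}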
	I0610 12:31:54.384189    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:54.384189    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:54.384189    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:54.384189    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:54.386897    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:54.386897    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:54.387890    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:54.387890    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:54.387890    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:54.387928    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:54.387928    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:54 GMT
	I0610 12:31:54.387928    8536 round_trippers.go:580]     Audit-Id: 74864d97-b595-4719-ad06-d69234f6cc38
	I0610 12:31:54.388307    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:54.388785    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:54.389321    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:54.389374    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:54.389374    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:54.392406    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:54.392489    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:54.392489    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:54.392489    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:54.392489    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:54.392547    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:54 GMT
	I0610 12:31:54.392547    8536 round_trippers.go:580]     Audit-Id: a954f982-1ca9-4c7d-9439-f0642919de98
	I0610 12:31:54.392547    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:54.393013    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:54.889555    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:54.889555    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:54.889555    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:54.889555    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:54.893119    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:54.893119    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:54.893119    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:54 GMT
	I0610 12:31:54.894097    8536 round_trippers.go:580]     Audit-Id: 331dc0da-d502-4bc8-abc1-99c921623748
	I0610 12:31:54.894097    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:54.894097    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:54.894097    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:54.894199    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:54.894275    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:54.894275    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:54.894275    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:54.894275    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:54.894275    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:54.899183    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:54.899183    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:54.899183    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:54 GMT
	I0610 12:31:54.899183    8536 round_trippers.go:580]     Audit-Id: 2a916e05-56a4-4d92-bbd6-e045002cf12d
	I0610 12:31:54.899183    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:54.899183    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:54.899183    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:54.899183    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:54.899840    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:55.387314    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:55.387314    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:55.387314    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:55.387314    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:55.391061    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:55.391061    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:55.391061    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:55.391061    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:55.391061    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:55 GMT
	I0610 12:31:55.391061    8536 round_trippers.go:580]     Audit-Id: cceb467b-b730-4a00-b13d-702ac7274d72
	I0610 12:31:55.391061    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:55.391061    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:55.391487    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:55.392294    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:55.392377    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:55.392404    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:55.392404    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:55.394408    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:55.394408    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:55.394408    8536 round_trippers.go:580]     Audit-Id: ab2052fb-a56e-4a09-94ca-ade21d8ff858
	I0610 12:31:55.394408    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:55.394408    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:55.394408    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:55.395319    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:55.395319    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:55 GMT
	I0610 12:31:55.395810    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:55.886020    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:55.886093    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:55.886093    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:55.886154    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:55.890321    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:55.890321    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:55.890321    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:55 GMT
	I0610 12:31:55.890321    8536 round_trippers.go:580]     Audit-Id: def65afa-0182-4015-b250-de270ddcbb81
	I0610 12:31:55.890393    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:55.890393    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:55.890393    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:55.890393    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:55.890593    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:55.891423    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:55.891506    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:55.891506    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:55.891580    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:55.897232    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:55.897232    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:55.897232    8536 round_trippers.go:580]     Audit-Id: 01ff2a75-fb24-4cf4-9dd0-d1e8ec935dae
	I0610 12:31:55.897232    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:55.897232    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:55.897232    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:55.897232    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:55.897232    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:55 GMT
	I0610 12:31:55.897887    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:56.386215    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:56.386215    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:56.386215    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:56.386215    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:56.389827    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:56.389827    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:56.389827    8536 round_trippers.go:580]     Audit-Id: 77540a06-2c66-48fb-83c1-05169ef67daa
	I0610 12:31:56.389827    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:56.389827    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:56.389827    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:56.389827    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:56.389827    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:56 GMT
	I0610 12:31:56.390372    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:56.391198    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:56.391198    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:56.391198    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:56.391198    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:56.394518    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:56.394518    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:56.394518    8536 round_trippers.go:580]     Audit-Id: 6784f723-a5b3-4dfd-83c5-6adfa15cacd4
	I0610 12:31:56.394518    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:56.394518    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:56.394518    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:56.394518    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:56.394518    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:56 GMT
	I0610 12:31:56.394903    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:56.395162    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
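
The block above is one full iteration of minikube's readiness poll (the pod_ready.go helper): roughly every 500 ms the test binary GETs the coredns pod, checks its Ready condition, GETs the node it is scheduled on, and logs "Ready":"False" until the condition flips. The following is a minimal client-go sketch of the same poll-and-check loop, not minikube's actual implementation; it assumes a kubeconfig at the default location, and the helper name isPodReady is illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, which is
// the state the poll in the log above is waiting for.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load the default kubeconfig (~/.kube/config), as the test binary does.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	for {
		// One GET per iteration, mirroring the round_trippers.go lines above.
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-kbhvv", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet; polling again")
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}

minikube's real helper also inspects the node and treats some pod phases as terminal; the sketch shows only the basic poll cycle visible in this log.
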
	I0610 12:31:56.884377    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:56.884377    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:56.884377    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:56.884377    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:56.888022    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:56.888022    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:56.888022    8536 round_trippers.go:580]     Audit-Id: 0c236ae1-d05e-4acd-b0a0-54c467334ef1
	I0610 12:31:56.888022    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:56.888022    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:56.888022    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:56.888918    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:56.888918    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:56 GMT
	I0610 12:31:56.889117    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:56.889924    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:56.890057    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:56.890057    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:56.890057    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:56.893564    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:56.893616    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:56.893616    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:56.893616    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:56.893616    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:56.893616    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:56.893616    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:56 GMT
	I0610 12:31:56.893616    8536 round_trippers.go:580]     Audit-Id: 9a766eb2-2e9f-44db-b4d2-96c65db5c0aa
	I0610 12:31:56.893616    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:57.384342    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:57.384430    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:57.384430    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:57.384532    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:57.389609    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:57.389609    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:57.389609    8536 round_trippers.go:580]     Audit-Id: c1c72663-29df-492e-b912-5481e6d7c9d4
	I0610 12:31:57.389609    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:57.389609    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:57.389609    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:57.389609    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:57.389609    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:57 GMT
	I0610 12:31:57.390233    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:57.390999    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:57.391065    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:57.391065    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:57.391065    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:57.395075    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:57.395983    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:57.395983    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:57.395983    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:57 GMT
	I0610 12:31:57.395983    8536 round_trippers.go:580]     Audit-Id: 8dd3578f-0c5f-4ca7-b52e-5c48f689533c
	I0610 12:31:57.395983    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:57.395983    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:57.395983    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:57.395983    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:57.887666    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:57.887666    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:57.887666    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:57.887666    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:57.891243    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:57.891243    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:57.891243    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:57.891243    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:57.891243    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:57.892203    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:57 GMT
	I0610 12:31:57.892203    8536 round_trippers.go:580]     Audit-Id: 0c7f820f-acf5-46f2-8184-90b5914fca2f
	I0610 12:31:57.892203    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:57.892412    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:57.893150    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:57.893211    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:57.893211    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:57.893211    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:57.896460    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:57.896460    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:57.896460    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:57.897153    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:57 GMT
	I0610 12:31:57.897153    8536 round_trippers.go:580]     Audit-Id: 3c48ed7d-2cca-4e91-9b50-2a3004eceb65
	I0610 12:31:57.897153    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:57.897153    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:57.897153    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:57.897219    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:58.389037    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:58.389037    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:58.389132    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:58.389132    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:58.392489    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:58.392489    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:58.392489    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:58.392489    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:58.392489    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:58 GMT
	I0610 12:31:58.392489    8536 round_trippers.go:580]     Audit-Id: d82fd350-72f6-4dd0-b873-2a60338d878f
	I0610 12:31:58.392489    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:58.392489    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:58.393796    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:58.394552    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:58.394552    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:58.394552    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:58.394552    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:58.401373    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:58.401896    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:58.401896    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:58 GMT
	I0610 12:31:58.401896    8536 round_trippers.go:580]     Audit-Id: 4e723312-8171-49eb-976e-299d1b32353f
	I0610 12:31:58.401896    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:58.401896    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:58.401896    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:58.401896    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:58.402805    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:58.402805    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
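
These request/response dumps come from client-go's debugging round tripper (transport/round_trippers.go) and from request.go, which client-go wires in automatically at high klog verbosity; minikube runs with --alsologtostderr and a high -v, which is why every GET is traced here. The sketch below shows how the same wire-level trace could be enabled explicitly in a client-go program by wrapping the transport. It is a hedged sketch: the DebugLevel constants listed are assumed from a recent client-go release and have shifted between versions.

package main

import (
	"net/http"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/transport"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Wrap every request with the debugging round tripper that produces
	// the round_trippers.go lines above: URL + timing, request headers,
	// response status, and response headers.
	cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
		return transport.NewDebuggingRoundTripper(rt,
			transport.DebugURLTiming,
			transport.DebugRequestHeaders,
			transport.DebugResponseStatus,
			transport.DebugResponseHeaders,
		)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = client // issue requests as usual; each one is now traced via klog
}

The X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers in every response identify the API Priority and Fairness FlowSchema and PriorityLevelConfiguration that classified the request; they are constant across this log because every poll is classified identically.
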
	I0610 12:31:58.888868    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:58.888868    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:58.888868    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:58.888868    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:58.894053    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:58.894125    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:58.894125    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:58.894125    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:58.894125    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:58.894125    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:58.894125    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:58 GMT
	I0610 12:31:58.894125    8536 round_trippers.go:580]     Audit-Id: cb3c92c1-7b69-4780-a616-000f6f9686b7
	I0610 12:31:58.894125    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:58.895216    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:58.895315    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:58.895315    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:58.895315    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:58.898143    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:58.898143    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:58.898143    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:58.898143    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:58.898143    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:58 GMT
	I0610 12:31:58.898143    8536 round_trippers.go:580]     Audit-Id: 3b62be86-b36c-4b59-938d-145866100929
	I0610 12:31:58.898143    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:58.898143    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:58.898655    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:59.389230    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:59.389558    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:59.389558    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:59.389558    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:59.393456    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:59.393456    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:59.393456    8536 round_trippers.go:580]     Audit-Id: aab38031-b235-4216-8532-a936860f3f8e
	I0610 12:31:59.393456    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:59.393456    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:59.393456    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:59.393456    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:59.393456    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:59 GMT
	I0610 12:31:59.393863    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:59.394675    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:59.394745    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:59.394745    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:59.394745    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:59.397091    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:59.397091    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:59.397091    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:59.397919    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:59.397919    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:59.397919    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:59 GMT
	I0610 12:31:59.397919    8536 round_trippers.go:580]     Audit-Id: b39d8db6-cfab-497e-8cc5-94ee57de9047
	I0610 12:31:59.397919    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:59.398385    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:59.886748    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:59.886811    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:59.886905    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:59.886905    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:59.890344    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:59.891102    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:59.891102    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:59.891102    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:59 GMT
	I0610 12:31:59.891102    8536 round_trippers.go:580]     Audit-Id: d2a32bb1-a3ff-4306-aac9-3487138956d7
	I0610 12:31:59.891102    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:59.891102    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:59.891102    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:59.891427    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:59.892165    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:59.892165    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:59.892165    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:59.892165    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:59.894722    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:59.894722    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:59.894722    8536 round_trippers.go:580]     Audit-Id: 02518c0b-5c4e-410e-9b6b-bcc7be846a89
	I0610 12:31:59.894722    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:59.895209    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:59.895209    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:59.895209    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:59.895209    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:59 GMT
	I0610 12:31:59.895338    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:00.387007    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:00.387007    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:00.387007    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:00.387007    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:00.391259    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:00.391259    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:00.391698    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:00.391698    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:00.391698    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:00.391698    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:00.391698    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:00 GMT
	I0610 12:32:00.391698    8536 round_trippers.go:580]     Audit-Id: 09ad5030-5939-4924-a103-8a2424c75246
	I0610 12:32:00.391960    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:00.392886    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:00.392886    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:00.392886    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:00.392886    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:00.396469    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:00.396469    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:00.396617    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:00.396617    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:00.396617    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:00.396617    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:00.396617    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:00 GMT
	I0610 12:32:00.396617    8536 round_trippers.go:580]     Audit-Id: b311b9ff-02f0-4547-88a3-cfa17fb4a565
	I0610 12:32:00.396775    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:00.897899    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:00.898262    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:00.898262    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:00.898262    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:00.902004    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:00.902185    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:00.902185    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:00 GMT
	I0610 12:32:00.902185    8536 round_trippers.go:580]     Audit-Id: 5972290a-208f-4029-9080-37557828a965
	I0610 12:32:00.902185    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:00.902185    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:00.902185    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:00.902185    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:00.903142    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:00.905469    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:00.905469    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:00.905877    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:00.905877    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:00.908613    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:00.908613    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:00.908613    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:00.908613    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:00.908613    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:00.908613    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:00.908613    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:00 GMT
	I0610 12:32:00.908613    8536 round_trippers.go:580]     Audit-Id: 0ff10877-cd2d-4d45-b7b7-3794fc9f8fbb
	I0610 12:32:00.909756    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:00.910226    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
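
Note that every poll returns the pod at the same resourceVersion ("1650"): the object is not changing between iterations, so each 500 ms cycle spends a pod GET and a node GET only to re-read identical state. A watch avoids that by blocking until the API server pushes an update. Below is a sketch of that alternative, reusing the client, ctx, and the illustrative isPodReady helper from the first example above; it is an assumption-laden fragment, not minikube's code.

// Block until the pod changes instead of re-GETting it twice a second.
w, err := client.CoreV1().Pods("kube-system").Watch(ctx, metav1.ListOptions{
	FieldSelector: "metadata.name=coredns-7db6d8ff4d-kbhvv",
})
if err != nil {
	panic(err)
}
defer w.Stop()
for ev := range w.ResultChan() {
	if pod, ok := ev.Object.(*corev1.Pod); ok && isPodReady(pod) {
		break // PodReady finally flipped to True
	}
}

A production waiter would layer a list-then-watch with resourceVersion handoff and timeout handling (client-go's tools/watch package provides helpers for this); the fragment shows only the core idea.
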
	I0610 12:32:01.398130    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:01.398130    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:01.398130    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:01.398130    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:01.402736    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:01.402736    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:01.402736    8536 round_trippers.go:580]     Audit-Id: 64c95df3-f492-42b4-a5cf-7f8b374e5ad4
	I0610 12:32:01.402736    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:01.402736    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:01.402844    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:01.402844    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:01.402844    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:01 GMT
	I0610 12:32:01.403066    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:01.403739    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:01.403739    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:01.403739    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:01.403739    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:01.406583    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:01.406583    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:01.406583    8536 round_trippers.go:580]     Audit-Id: ecba7fbc-785f-405f-bfe6-ae982452641d
	I0610 12:32:01.406583    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:01.406583    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:01.406583    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:01.407186    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:01.407186    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:01 GMT
	I0610 12:32:01.407676    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:01.898052    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:01.898052    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:01.898182    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:01.898182    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:01.903108    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:01.903108    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:01.903108    8536 round_trippers.go:580]     Audit-Id: 180f4d90-f990-48d8-8eff-ed2063c95c66
	I0610 12:32:01.903798    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:01.903798    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:01.903798    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:01.903798    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:01.903798    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:01 GMT
	I0610 12:32:01.904024    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:01.904845    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:01.904845    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:01.904903    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:01.904903    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:01.907325    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:01.907325    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:01.907325    8536 round_trippers.go:580]     Audit-Id: 7805a95c-9503-417e-98f7-10bde24f6457
	I0610 12:32:01.907325    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:01.907325    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:01.907325    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:01.907325    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:01.907325    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:01 GMT
	I0610 12:32:01.908386    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:02.384478    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:02.384478    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:02.384478    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:02.384478    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:02.386594    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:02.386594    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:02.386594    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:02.386594    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:02.386594    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:02.386594    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:02.386594    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:02 GMT
	I0610 12:32:02.386594    8536 round_trippers.go:580]     Audit-Id: ff808251-d9c0-4cdd-a543-148996fb6689
	I0610 12:32:02.388707    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:02.389552    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:02.389552    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:02.389552    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:02.389552    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:02.394130    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:02.394130    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:02.394130    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:02.394130    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:02.394130    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:02.394130    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:02 GMT
	I0610 12:32:02.394130    8536 round_trippers.go:580]     Audit-Id: 1f1ef3ed-633c-4d6f-8235-c54e80ec57ed
	I0610 12:32:02.394130    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:02.394867    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:02.898471    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:02.898553    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:02.898553    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:02.898553    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:02.901990    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:02.901990    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:02.901990    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:02 GMT
	I0610 12:32:02.901990    8536 round_trippers.go:580]     Audit-Id: 76f9a2eb-70c9-4754-841b-fd01e32a08f2
	I0610 12:32:02.901990    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:02.902857    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:02.902857    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:02.902857    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:02.903216    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:02.904495    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:02.904597    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:02.904597    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:02.904597    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:02.909123    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:02.909409    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:02.909465    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:02.909465    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:02.909465    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:02.909536    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:02.909536    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:02 GMT
	I0610 12:32:02.909599    8536 round_trippers.go:580]     Audit-Id: b8f79694-e699-4c29-8be6-daf0bead409f
	I0610 12:32:02.909981    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:02.910792    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
	I0610 12:32:03.397740    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:03.397830    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:03.397830    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:03.397830    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:03.400218    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:03.400218    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:03.400218    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:03.400218    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:03 GMT
	I0610 12:32:03.401382    8536 round_trippers.go:580]     Audit-Id: 020e7716-3e5d-4506-8086-93a851176bb1
	I0610 12:32:03.401404    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:03.401423    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:03.401423    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:03.401476    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:03.402567    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:03.402567    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:03.402567    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:03.402567    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:03.405133    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:03.405133    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:03.405407    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:03.405407    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:03.405407    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:03 GMT
	I0610 12:32:03.405407    8536 round_trippers.go:580]     Audit-Id: 0f513b19-db4d-4167-9a56-ebe05280b01e
	I0610 12:32:03.405407    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:03.405407    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:03.405649    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:03.893670    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:03.893670    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:03.893872    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:03.893872    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:03.897191    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:03.897191    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:03.897191    8536 round_trippers.go:580]     Audit-Id: 08204798-b862-4224-a080-43229ee16660
	I0610 12:32:03.897191    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:03.897795    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:03.897795    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:03.897795    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:03.897795    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:03 GMT
	I0610 12:32:03.898006    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:03.899039    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:03.899185    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:03.899185    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:03.899185    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:03.903329    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:03.903329    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:03.903329    8536 round_trippers.go:580]     Audit-Id: f3d5b8ef-1bd6-4132-bedc-1cfad2a97dc7
	I0610 12:32:03.903329    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:03.903329    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:03.903329    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:03.903329    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:03.903329    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:03 GMT
	I0610 12:32:03.903329    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:04.397369    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:04.397476    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:04.397476    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:04.397476    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:04.400937    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:04.400937    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:04.400937    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:04.400937    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:04.400937    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:04.401074    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:04 GMT
	I0610 12:32:04.401074    8536 round_trippers.go:580]     Audit-Id: 85f87a79-354e-4891-a1be-7f3a9425f60d
	I0610 12:32:04.401074    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:04.401421    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:04.402996    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:04.404655    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:04.404655    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:04.404655    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:04.408992    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:04.408992    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:04.408992    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:04.408992    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:04.408992    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:04.408992    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:04 GMT
	I0610 12:32:04.408992    8536 round_trippers.go:580]     Audit-Id: 43e8a598-1ec4-4a1b-a363-3585b259a79e
	I0610 12:32:04.408992    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:04.410084    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:04.891338    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:04.891460    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:04.891460    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:04.891460    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:04.894807    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:04.894986    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:04.894986    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:04.894986    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:04 GMT
	I0610 12:32:04.894986    8536 round_trippers.go:580]     Audit-Id: 1f313cdc-e0e4-4263-8f9b-bbefee4cd981
	I0610 12:32:04.894986    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:04.894986    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:04.894986    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:04.895276    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:04.896059    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:04.896092    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:04.896092    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:04.896190    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:04.899401    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:04.899401    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:04.899401    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:04 GMT
	I0610 12:32:04.899401    8536 round_trippers.go:580]     Audit-Id: 8505f402-ef3c-451b-8dee-9622097faedb
	I0610 12:32:04.899401    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:04.899401    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:04.899401    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:04.899401    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:04.900269    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:05.392721    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:05.392721    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:05.392721    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:05.392721    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:05.400699    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:32:05.400699    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:05.400699    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:05.400699    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:05 GMT
	I0610 12:32:05.400699    8536 round_trippers.go:580]     Audit-Id: 92a99aef-9da1-4a5a-91bd-24d7c020b40e
	I0610 12:32:05.400699    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:05.400699    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:05.400699    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:05.400699    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:05.401728    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:05.401781    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:05.401781    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:05.401781    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:05.408287    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:32:05.408349    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:05.408349    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:05.408349    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:05.408349    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:05.408349    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:05 GMT
	I0610 12:32:05.408349    8536 round_trippers.go:580]     Audit-Id: a82c4c41-3687-4671-a8c0-ad49faef5770
	I0610 12:32:05.408349    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:05.408639    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:05.409481    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
	I0610 12:32:05.883733    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:05.883801    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:05.883801    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:05.883801    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:05.889843    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:32:05.890002    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:05.890002    8536 round_trippers.go:580]     Audit-Id: 1f6902d7-3ab7-4ecd-88e8-2d8210758fbe
	I0610 12:32:05.890002    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:05.890002    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:05.890090    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:05.890090    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:05.890090    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:05 GMT
	I0610 12:32:05.890391    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:05.891308    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:05.891401    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:05.891401    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:05.891401    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:05.894692    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:05.894692    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:05.894692    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:05.894692    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:05 GMT
	I0610 12:32:05.894692    8536 round_trippers.go:580]     Audit-Id: 25245a7e-9886-43af-8201-7119096744a2
	I0610 12:32:05.894692    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:05.894692    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:05.894692    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:05.895556    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.383837    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:06.383915    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.383958    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.383958    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.386745    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:06.386745    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.386745    8536 round_trippers.go:580]     Audit-Id: 9af42174-80c6-4ad3-b20b-e3c7bff12947
	I0610 12:32:06.386745    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.386745    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.386745    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.386745    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.386745    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.387810    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:06.388373    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.388524    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.388524    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.388524    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.390830    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:06.391669    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.391669    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.391669    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.391669    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.391669    8536 round_trippers.go:580]     Audit-Id: 29b32f68-30cf-44e8-9987-6a7f27022936
	I0610 12:32:06.391669    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.391669    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.392016    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.890846    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:06.890908    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.890908    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.890908    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.898711    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:32:06.898711    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.898711    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.898711    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.898711    8536 round_trippers.go:580]     Audit-Id: 27435d63-0bdc-4f00-9adb-9527eb6a456c
	I0610 12:32:06.898818    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.898818    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.898818    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.898948    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1827","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0610 12:32:06.899777    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.899777    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.899777    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.899777    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.903363    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:06.904148    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.904148    8536 round_trippers.go:580]     Audit-Id: 429a5b7f-7329-4fbe-8d96-817e9acce578
	I0610 12:32:06.904148    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.904148    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.904148    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.904148    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.904148    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.904425    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.904425    8536 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:06.904425    8536 pod_ready.go:81] duration metric: took 26.0212161s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
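	[editor's note] The 26s stretch of log above is one readiness wait: minikube polls the coredns pod (and its node) roughly every 500ms, each iteration producing one GET pair with its response headers, until the pod's Ready condition flips to True, within the 6m budget pod_ready.go:78 announces. Below is a minimal, hypothetical sketch of that pattern using client-go — not minikube's actual pod_ready.go; the function name waitPodReady, the package name, and the 500ms interval are assumptions for illustration, while the 6m timeout and the Ready-condition check mirror what the log shows.

	// Package podwait is a hypothetical illustration of the polling loop
	// recorded in the log above; it is not minikube's implementation.
	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady blocks until the named pod reports Ready=True. Each poll
	// tick corresponds to one "GET .../pods/<name>" entry in the log; the
	// 500ms interval is an assumption inferred from the log timestamps.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				// A pod is Ready when its PodReady condition is True — the
				// log prints this as status "Ready":"False" until it flips.
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			return fmt.Errorf("pod %q in %q never became Ready: %w", name, ns, err)
		}
		// Matches the shape of the "duration metric: took ..." line above.
		fmt.Printf("duration metric: took %s for pod %q to be Ready\n", time.Since(start), name)
		return nil
	}

	Once the condition returns true, the loop exits and the wait moves on to the next control-plane pod, which is exactly the transition the next log line records for etcd-multinode-813300.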
	I0610 12:32:06.904425    8536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.905002    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-813300
	I0610 12:32:06.905045    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.905100    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.905100    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.912112    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:32:06.912112    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.912112    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.912112    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.912112    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.912112    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.912112    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.912112    8536 round_trippers.go:580]     Audit-Id: d2bd7eca-1670-4da0-b9a8-1a6449ada2e5
	I0610 12:32:06.912112    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"f9259e5e-61e9-4252-b7c6-de5d499eb9c1","resourceVersion":"1765","creationTimestamp":"2024-06-10T12:31:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.150.144:2379","kubernetes.io/config.hash":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.mirror":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.seen":"2024-06-10T12:30:54.120335207Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0610 12:32:06.913144    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.913240    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.913284    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.913284    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.916692    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:06.916692    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.916692    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.916692    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.916692    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.916692    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.916692    8536 round_trippers.go:580]     Audit-Id: a80f8427-8628-48fd-8a2a-4c5fc77cd525
	I0610 12:32:06.916692    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.916692    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.917456    8536 pod_ready.go:92] pod "etcd-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:06.917456    8536 pod_ready.go:81] duration metric: took 13.0311ms for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.917600    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.917727    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-813300
	I0610 12:32:06.917727    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.917784    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.917784    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.920518    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:06.920518    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.920518    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.920518    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.920894    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.920894    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.920894    8536 round_trippers.go:580]     Audit-Id: 6fec4a4e-9615-4800-818f-262efdda4b7b
	I0610 12:32:06.920894    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.921146    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-813300","namespace":"kube-system","uid":"2cf29b2c-a2a9-46ec-bbc8-fe884e97df06","resourceVersion":"1748","creationTimestamp":"2024-06-10T12:31:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.150.144:8443","kubernetes.io/config.hash":"180cf4cc399d604c28cc4df1442ebd5a","kubernetes.io/config.mirror":"180cf4cc399d604c28cc4df1442ebd5a","kubernetes.io/config.seen":"2024-06-10T12:30:54.115839018Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0610 12:32:06.921651    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.921710    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.921710    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.921710    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.923951    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:06.924645    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.924725    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.924725    8536 round_trippers.go:580]     Audit-Id: 2da9c252-4da7-489b-ab91-7a2644ba3584
	I0610 12:32:06.924725    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.924725    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.924725    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.924725    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.924725    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.925288    8536 pod_ready.go:92] pod "kube-apiserver-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:06.925288    8536 pod_ready.go:81] duration metric: took 7.6881ms for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.925288    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.925437    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-813300
	I0610 12:32:06.925437    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.925437    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.925437    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.928073    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:06.928073    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.928750    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.928750    8536 round_trippers.go:580]     Audit-Id: 8a0a0c9a-163a-419d-ab27-d8be40317c05
	I0610 12:32:06.928750    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.928750    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.928750    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.928750    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.929080    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-813300","namespace":"kube-system","uid":"879be9d7-8b2b-4f58-ba70-61d4e9f3441e","resourceVersion":"1767","creationTimestamp":"2024-06-10T12:08:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.mirror":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.seen":"2024-06-10T12:08:00.781970961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0610 12:32:06.929699    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.929764    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.929764    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.929764    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.933370    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:06.933370    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.933370    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.933370    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.933370    8536 round_trippers.go:580]     Audit-Id: 30e99097-8527-4d36-b4bf-efe6d6f664e6
	I0610 12:32:06.933370    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.933370    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.933370    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.933370    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.933370    8536 pod_ready.go:92] pod "kube-controller-manager-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:06.934364    8536 pod_ready.go:81] duration metric: took 9.0756ms for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.934364    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.934364    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:32:06.934364    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.934364    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.934364    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.937391    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:06.937855    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.937855    8536 round_trippers.go:580]     Audit-Id: aca57c61-3cfa-4d38-bdb3-b0a1f58731dd
	I0610 12:32:06.937855    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.937855    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.937855    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.937855    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.937855    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.938229    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrpvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1","resourceVersion":"1665","creationTimestamp":"2024-06-10T12:08:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0610 12:32:06.938864    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.938923    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.938923    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.938923    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.940667    8536 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 12:32:06.941536    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.941536    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.941601    8536 round_trippers.go:580]     Audit-Id: 6c7d4bf2-4348-46c0-83a3-349388174104
	I0610 12:32:06.941601    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.941601    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.941601    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.941601    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.941601    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.941601    8536 pod_ready.go:92] pod "kube-proxy-nrpvt" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:06.942182    8536 pod_ready.go:81] duration metric: took 7.8183ms for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.942323    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:07.094112    8536 request.go:629] Waited for 151.6022ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:32:07.094347    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:32:07.094347    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:07.094347    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:07.094347    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:07.097319    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:07.097453    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:07.097453    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:07 GMT
	I0610 12:32:07.097453    8536 round_trippers.go:580]     Audit-Id: 61ff617e-56ee-4ec1-b07b-35f5078336fc
	I0610 12:32:07.097453    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:07.097453    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:07.097608    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:07.097608    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:07.098035    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rx2b2","generateName":"kube-proxy-","namespace":"kube-system","uid":"ce59a99b-a561-4598-9399-147f748433a2","resourceVersion":"1632","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0610 12:32:07.296812    8536 request.go:629] Waited for 197.6622ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:32:07.296922    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:32:07.297040    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:07.297040    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:07.297040    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:07.304019    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:32:07.304937    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:07.304937    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:07 GMT
	I0610 12:32:07.304937    8536 round_trippers.go:580]     Audit-Id: eaa352c8-3a82-448c-8873-b1d70fb7b43d
	I0610 12:32:07.304937    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:07.304937    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:07.304937    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:07.304937    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:07.304937    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"1817","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4583 chars]
	I0610 12:32:07.304937    8536 pod_ready.go:97] node "multinode-813300-m02" hosting pod "kube-proxy-rx2b2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m02" has status "Ready":"Unknown"
	I0610 12:32:07.304937    8536 pod_ready.go:81] duration metric: took 362.6107ms for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	E0610 12:32:07.304937    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300-m02" hosting pod "kube-proxy-rx2b2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m02" has status "Ready":"Unknown"
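[editor's note] The "(skipping!)" pair above traces the extra-wait gate: a pod is only treated as Ready if its host node's Ready condition is "True"; here the node reports "Unknown", so the wait is skipped rather than timed out. A minimal client-go sketch of that gate follows, assuming an initialized kubernetes.Interface; the package and function names are illustrative, not minikube's actual pod_ready.go helpers.

// Hypothetical re-creation of the node-Ready gate traced in the log above:
// a pod on a node whose Ready condition is not "True" is skipped, not waited on.
package nodegate

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func podReadyOnReadyNode(ctx context.Context, cs kubernetes.Interface, ns, pod string) (bool, error) {
	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	n, err := cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			// Matches the log: node has status "Ready":"Unknown" -> skip this pod.
			return false, fmt.Errorf("node %q not Ready (%s), skipping pod %q", n.Name, c.Status, pod)
		}
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}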
	I0610 12:32:07.304937    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vw56h" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:07.499191    8536 request.go:629] Waited for 194.0542ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vw56h
	I0610 12:32:07.499381    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vw56h
	I0610 12:32:07.499381    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:07.499381    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:07.499381    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:07.503911    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:07.504002    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:07.504002    8536 round_trippers.go:580]     Audit-Id: 082572c8-0f20-449d-8a16-f6239b8e40de
	I0610 12:32:07.504002    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:07.504002    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:07.504002    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:07.504002    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:07.504070    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:07 GMT
	I0610 12:32:07.504070    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vw56h","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3f9e738-89d2-4776-a212-a1ca28952f7c","resourceVersion":"1595","creationTimestamp":"2024-06-10T12:25:52Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0610 12:32:07.702255    8536 request.go:629] Waited for 196.7957ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m03
	I0610 12:32:07.702255    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m03
	I0610 12:32:07.702255    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:07.702475    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:07.702475    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:07.706939    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:07.706939    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:07.706939    8536 round_trippers.go:580]     Audit-Id: 48ed5ece-df30-4f8a-8d72-813f3ac5e860
	I0610 12:32:07.706939    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:07.706939    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:07.706939    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:07.706939    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:07.707259    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:07 GMT
	I0610 12:32:07.707491    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m03","uid":"7d0b0b62-45c8-40aa-9f7a-5bb189395355","resourceVersion":"1813","creationTimestamp":"2024-06-10T12:25:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_25_53_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4413 chars]
	I0610 12:32:07.708036    8536 pod_ready.go:97] node "multinode-813300-m03" hosting pod "kube-proxy-vw56h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m03" has status "Ready":"Unknown"
	I0610 12:32:07.708036    8536 pod_ready.go:81] duration metric: took 403.096ms for pod "kube-proxy-vw56h" in "kube-system" namespace to be "Ready" ...
	E0610 12:32:07.708036    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300-m03" hosting pod "kube-proxy-vw56h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m03" has status "Ready":"Unknown"
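[editor's note] The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's client-side token-bucket limiter, not by the API server: with rest.Config left at its defaults, requests are capped at roughly 5 QPS with a burst of 10, which is why back-to-back GETs each pay a ~150-200ms wait here. A sketch of where those knobs live follows; QPS and Burst are real rest.Config fields, and the numbers shown are the documented client-go defaults made explicit (nothing here is confirmed minikube configuration).

// Sketch: the client-side limiter behind the "Waited for ... due to
// client-side throttling" log lines. Raising QPS/Burst is how callers
// avoid these artificial waits.
package apithrottle

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// Real rest.Config fields; values are just the client-go defaults
	// written out for illustration.
	cfg.QPS = 5
	cfg.Burst = 10
	return kubernetes.NewForConfig(cfg)
}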
	I0610 12:32:07.708104    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:07.903404    8536 request.go:629] Waited for 195.07ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:32:07.903404    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:32:07.903404    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:07.903404    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:07.903404    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:07.908263    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:07.908263    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:07.908263    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:07 GMT
	I0610 12:32:07.908263    8536 round_trippers.go:580]     Audit-Id: 2b14cc46-f47a-4fa1-bfa8-9f0430821547
	I0610 12:32:07.908420    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:07.908420    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:07.908420    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:07.908420    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:07.908692    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-813300","namespace":"kube-system","uid":"bd85735c-2f0d-48ab-bb0e-83f471c3af0a","resourceVersion":"1742","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.mirror":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.seen":"2024-06-10T12:08:00.781972261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0610 12:32:08.091840    8536 request.go:629] Waited for 182.2159ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:08.092125    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:08.092125    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:08.092125    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:08.092125    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:08.096985    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:08.096985    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:08.096985    8536 round_trippers.go:580]     Audit-Id: 2dc8d8ea-72d2-43e5-bff1-f537d7d89d5c
	I0610 12:32:08.096985    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:08.096985    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:08.096985    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:08.096985    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:08.096985    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:08 GMT
	I0610 12:32:08.097927    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:08.098297    8536 pod_ready.go:92] pod "kube-scheduler-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:08.098297    8536 pod_ready.go:81] duration metric: took 390.1901ms for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:08.098494    8536 pod_ready.go:38] duration metric: took 27.231341s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
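[editor's note] The 27.231341s total above is the sum of per-pod waits like the ones traced before it, each individually bounded at 6m0s. A hedged sketch of the overall shape of such a bounded readiness poll, using apimachinery's wait helper; this is illustrative only, and minikube's actual pod_ready.go loop may differ in interval, error handling, and structure.

// Sketch of a bounded readiness wait: poll a pod's Ready condition
// every few seconds, giving up after 6 minutes.
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient; keep polling
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}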
	I0610 12:32:08.098494    8536 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:32:08.108011    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 12:32:08.132906    8536 command_runner.go:130] > d7941126134f
	I0610 12:32:08.133490    8536 logs.go:276] 1 containers: [d7941126134f]
	I0610 12:32:08.147897    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 12:32:08.171623    8536 command_runner.go:130] > 877ee07c1499
	I0610 12:32:08.172935    8536 logs.go:276] 1 containers: [877ee07c1499]
	I0610 12:32:08.181505    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 12:32:08.212642    8536 command_runner.go:130] > 24f3f7e041f9
	I0610 12:32:08.213523    8536 command_runner.go:130] > f2e39052db19
	I0610 12:32:08.213637    8536 logs.go:276] 2 containers: [24f3f7e041f9 f2e39052db19]
	I0610 12:32:08.222946    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 12:32:08.249302    8536 command_runner.go:130] > d90e72ef4670
	I0610 12:32:08.249302    8536 command_runner.go:130] > bd1a6cd98743
	I0610 12:32:08.249302    8536 logs.go:276] 2 containers: [d90e72ef4670 bd1a6cd98743]
	I0610 12:32:08.261166    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 12:32:08.286088    8536 command_runner.go:130] > 1de5fa0ef838
	I0610 12:32:08.286088    8536 command_runner.go:130] > afad8b05897e
	I0610 12:32:08.287155    8536 logs.go:276] 2 containers: [1de5fa0ef838 afad8b05897e]
	I0610 12:32:08.300463    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 12:32:08.327222    8536 command_runner.go:130] > 3bee53d5fef9
	I0610 12:32:08.327222    8536 command_runner.go:130] > f1409bf44ff1
	I0610 12:32:08.327222    8536 logs.go:276] 2 containers: [3bee53d5fef9 f1409bf44ff1]
	I0610 12:32:08.337171    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 12:32:08.363781    8536 command_runner.go:130] > c3c4316beca6
	I0610 12:32:08.363781    8536 command_runner.go:130] > c39d54960e7d
	I0610 12:32:08.366172    8536 logs.go:276] 2 containers: [c3c4316beca6 c39d54960e7d]
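[editor's note] The logs.go block above enumerates container IDs one control-plane component at a time, running `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` over SSH inside the node. A minimal local-shell equivalent in Go follows (SSH transport omitted); the filter and format flags are taken verbatim from the log, while the package and function names are made up for the sketch.

// Sketch of the container enumeration traced above: one `docker ps -a`
// per component, filtered by the k8s_<component> container-name prefix,
// printing only IDs (one or two per component here, since restarted
// containers also match with -a).
package dockerps

import (
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// Output is one short ID per line, e.g. "d90e72ef4670\nbd1a6cd98743\n".
	return strings.Fields(string(out)), nil
}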
	I0610 12:32:08.366172    8536 logs.go:123] Gathering logs for kube-scheduler [bd1a6cd98743] ...
	I0610 12:32:08.366294    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1a6cd98743"
	I0610 12:32:08.397218    8536 command_runner.go:130] ! I0610 12:07:55.711360       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.417322       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.417963       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.418046       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.418071       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.459055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.460659       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.464904       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.464952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.466483       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.466650       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.502453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! E0610 12:07:57.507264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.503672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.506076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.506243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.506320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.506362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.397845    8536 command_runner.go:130] ! W0610 12:07:57.506402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:08.397943    8536 command_runner.go:130] ! W0610 12:07:57.506651       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.398011    8536 command_runner.go:130] ! W0610 12:07:57.506722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:08.398078    8536 command_runner.go:130] ! W0610 12:07:57.507113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.398078    8536 command_runner.go:130] ! W0610 12:07:57.507193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:08.398161    8536 command_runner.go:130] ! E0610 12:07:57.511548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.398242    8536 command_runner.go:130] ! E0610 12:07:57.511795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:08.398328    8536 command_runner.go:130] ! E0610 12:07:57.512240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.398387    8536 command_runner.go:130] ! E0610 12:07:57.512647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:08.398470    8536 command_runner.go:130] ! E0610 12:07:57.515128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.398532    8536 command_runner.go:130] ! E0610 12:07:57.515218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:08.398532    8536 command_runner.go:130] ! E0610 12:07:57.515698       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.398665    8536 command_runner.go:130] ! E0610 12:07:57.516017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:08.398733    8536 command_runner.go:130] ! E0610 12:07:57.516332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.398795    8536 command_runner.go:130] ! E0610 12:07:57.516529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:08.398849    8536 command_runner.go:130] ! W0610 12:07:57.537276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:08.398994    8536 command_runner.go:130] ! E0610 12:07:57.537491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:57.537680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:57.538611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:57.537622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:57.538734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:57.538013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:57.539237       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:58.345815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:58.345914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:58.356843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:58.357045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:58.406587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:58.406863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:58.426795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399592    8536 command_runner.go:130] ! E0610 12:07:58.427119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.503514       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.503568       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.610877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.611650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.611603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.612141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.614694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.614992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.752570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.752635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.810605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.810721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.815170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.815852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.816493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.816687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.834947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.400198    8536 command_runner.go:130] ! E0610 12:07:58.836145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.400198    8536 command_runner.go:130] ! W0610 12:07:58.838693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:08.400198    8536 command_runner.go:130] ! E0610 12:07:58.838938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:08.400392    8536 command_runner.go:130] ! W0610 12:07:58.897162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:08.400392    8536 command_runner.go:130] ! E0610 12:07:58.897200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:08.400466    8536 command_runner.go:130] ! I0610 12:08:01.565495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:08.400527    8536 command_runner.go:130] ! E0610 12:28:16.298586       1 run.go:74] "command failed" err="finished without leader elect"
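
	Note: the reflector warnings and errors above are a startup race, not a misconfiguration. After the restart, the kube-scheduler's informers begin listing resources before the apiserver has re-established authorization, so every list is denied for user "system:kube-scheduler"; the informer sync at 12:08:01 shows it recovered on its own. The final "finished without leader elect" at 12:28:16 is the old scheduler instance exiting when the control plane went down, ahead of the new instance that starts at 12:30:56 further below. If denials like these persisted, a quick spot-check of the scheduler's effective permissions (a hypothetical follow-up, not part of the test harness) would be:

	    kubectl --context multinode-813300 auth can-i list statefulsets.apps --as=system:kube-scheduler
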
	I0610 12:32:08.413382    8536 logs.go:123] Gathering logs for kube-proxy [1de5fa0ef838] ...
	I0610 12:32:08.413382    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1de5fa0ef838"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.254962       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.294630       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.150.144"]
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.403290       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.403338       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.403357       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.416009       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.416300       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.416345       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.424657       1 config.go:192] "Starting service config controller"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.425325       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.425369       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.425382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.432037       1 config.go:319] "Starting node config controller"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.432075       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.535663       1 shared_informer.go:320] Caches are synced for node config
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.535744       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.535786       1 shared_informer.go:320] Caches are synced for endpoint slice config
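
	Note: the kube-proxy dump above shows a clean restart: it detects iptables, retrieves the node IP 172.17.150.144, and all three shared informers (service config, endpoint slice config, node config) report "Caches are synced" within roughly 100 ms. Each "Gathering logs for ..." step is minikube shelling into the node and running docker logs against the container ID; assuming the container still exists, the same dump can be reproduced by hand with:

	    out/minikube-windows-amd64.exe -p multinode-813300 ssh -- docker logs --tail 400 1de5fa0ef838
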
	I0610 12:32:08.447997    8536 logs.go:123] Gathering logs for kube-apiserver [d7941126134f] ...
	I0610 12:32:08.448111    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7941126134f"
	I0610 12:32:08.477367    8536 command_runner.go:130] ! I0610 12:30:56.783636       1 options.go:221] external host was not specified, using 172.17.150.144
	I0610 12:32:08.477367    8536 command_runner.go:130] ! I0610 12:30:56.802716       1 server.go:148] Version: v1.30.1
	I0610 12:32:08.477367    8536 command_runner.go:130] ! I0610 12:30:56.802771       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.477919    8536 command_runner.go:130] ! I0610 12:30:57.206580       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 12:32:08.478050    8536 command_runner.go:130] ! I0610 12:30:57.224598       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:08.478073    8536 command_runner.go:130] ! I0610 12:30:57.225809       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 12:32:08.478209    8536 command_runner.go:130] ! I0610 12:30:57.226002       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 12:32:08.478269    8536 command_runner.go:130] ! I0610 12:30:57.226365       1 instance.go:299] Using reconciler: lease
	I0610 12:32:08.478269    8536 command_runner.go:130] ! I0610 12:30:57.637999       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0610 12:32:08.478312    8536 command_runner.go:130] ! W0610 12:30:57.638403       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478312    8536 command_runner.go:130] ! I0610 12:30:58.007103       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0610 12:32:08.478312    8536 command_runner.go:130] ! I0610 12:30:58.008169       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0610 12:32:08.478365    8536 command_runner.go:130] ! I0610 12:30:58.357732       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0610 12:32:08.478365    8536 command_runner.go:130] ! I0610 12:30:58.553660       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0610 12:32:08.478407    8536 command_runner.go:130] ! I0610 12:30:58.567826       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0610 12:32:08.478407    8536 command_runner.go:130] ! W0610 12:30:58.567936       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478477    8536 command_runner.go:130] ! W0610 12:30:58.567947       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.478477    8536 command_runner.go:130] ! I0610 12:30:58.569137       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0610 12:32:08.478518    8536 command_runner.go:130] ! W0610 12:30:58.569236       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478518    8536 command_runner.go:130] ! I0610 12:30:58.570636       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0610 12:32:08.478518    8536 command_runner.go:130] ! I0610 12:30:58.572063       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0610 12:32:08.478570    8536 command_runner.go:130] ! W0610 12:30:58.572082       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0610 12:32:08.478570    8536 command_runner.go:130] ! W0610 12:30:58.572088       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0610 12:32:08.478603    8536 command_runner.go:130] ! I0610 12:30:58.575154       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0610 12:32:08.478653    8536 command_runner.go:130] ! W0610 12:30:58.575194       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0610 12:32:08.478653    8536 command_runner.go:130] ! I0610 12:30:58.576862       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0610 12:32:08.478704    8536 command_runner.go:130] ! W0610 12:30:58.576966       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478704    8536 command_runner.go:130] ! W0610 12:30:58.576976       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.478754    8536 command_runner.go:130] ! I0610 12:30:58.577920       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0610 12:32:08.478754    8536 command_runner.go:130] ! W0610 12:30:58.578059       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478803    8536 command_runner.go:130] ! W0610 12:30:58.578305       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478871    8536 command_runner.go:130] ! I0610 12:30:58.579295       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0610 12:32:08.478907    8536 command_runner.go:130] ! I0610 12:30:58.581807       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0610 12:32:08.478907    8536 command_runner.go:130] ! W0610 12:30:58.581943       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478907    8536 command_runner.go:130] ! W0610 12:30:58.582127       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.478907    8536 command_runner.go:130] ! I0610 12:30:58.583254       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0610 12:32:08.478971    8536 command_runner.go:130] ! W0610 12:30:58.583359       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479047    8536 command_runner.go:130] ! W0610 12:30:58.583370       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.479047    8536 command_runner.go:130] ! I0610 12:30:58.594003       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0610 12:32:08.479094    8536 command_runner.go:130] ! W0610 12:30:58.594046       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0610 12:32:08.479094    8536 command_runner.go:130] ! I0610 12:30:58.597008       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0610 12:32:08.479176    8536 command_runner.go:130] ! W0610 12:30:58.597028       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479176    8536 command_runner.go:130] ! W0610 12:30:58.597047       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.479176    8536 command_runner.go:130] ! I0610 12:30:58.597658       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0610 12:32:08.479238    8536 command_runner.go:130] ! W0610 12:30:58.597679       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479238    8536 command_runner.go:130] ! W0610 12:30:58.597686       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.479238    8536 command_runner.go:130] ! I0610 12:30:58.602889       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0610 12:32:08.479305    8536 command_runner.go:130] ! W0610 12:30:58.602907       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479305    8536 command_runner.go:130] ! W0610 12:30:58.602913       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.479305    8536 command_runner.go:130] ! I0610 12:30:58.608646       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0610 12:32:08.479305    8536 command_runner.go:130] ! I0610 12:30:58.610262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0610 12:32:08.479368    8536 command_runner.go:130] ! W0610 12:30:58.610275       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0610 12:32:08.479368    8536 command_runner.go:130] ! W0610 12:30:58.610281       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479368    8536 command_runner.go:130] ! I0610 12:30:58.619816       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0610 12:32:08.479368    8536 command_runner.go:130] ! W0610 12:30:58.619856       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0610 12:32:08.479435    8536 command_runner.go:130] ! W0610 12:30:58.619866       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0610 12:32:08.479435    8536 command_runner.go:130] ! I0610 12:30:58.627044       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0610 12:32:08.479435    8536 command_runner.go:130] ! W0610 12:30:58.627092       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479519    8536 command_runner.go:130] ! W0610 12:30:58.627296       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.479519    8536 command_runner.go:130] ! I0610 12:30:58.629017       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0610 12:32:08.479519    8536 command_runner.go:130] ! W0610 12:30:58.629067       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479519    8536 command_runner.go:130] ! I0610 12:30:58.659122       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0610 12:32:08.479582    8536 command_runner.go:130] ! W0610 12:30:58.659244       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479582    8536 command_runner.go:130] ! I0610 12:30:59.341469       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:08.479582    8536 command_runner.go:130] ! I0610 12:30:59.341814       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:08.479639    8536 command_runner.go:130] ! I0610 12:30:59.341806       1 secure_serving.go:213] Serving securely on [::]:8443
	I0610 12:32:08.479639    8536 command_runner.go:130] ! I0610 12:30:59.342486       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0610 12:32:08.479720    8536 command_runner.go:130] ! I0610 12:30:59.342867       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0610 12:32:08.479720    8536 command_runner.go:130] ! I0610 12:30:59.342901       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0610 12:32:08.479777    8536 command_runner.go:130] ! I0610 12:30:59.342987       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0610 12:32:08.479777    8536 command_runner.go:130] ! I0610 12:30:59.341865       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:08.479777    8536 command_runner.go:130] ! I0610 12:30:59.344865       1 controller.go:116] Starting legacy_token_tracking_controller
	I0610 12:32:08.479777    8536 command_runner.go:130] ! I0610 12:30:59.344899       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0610 12:32:08.479841    8536 command_runner.go:130] ! I0610 12:30:59.346737       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0610 12:32:08.479841    8536 command_runner.go:130] ! I0610 12:30:59.346910       1 available_controller.go:423] Starting AvailableConditionController
	I0610 12:32:08.479841    8536 command_runner.go:130] ! I0610 12:30:59.346960       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0610 12:32:08.479897    8536 command_runner.go:130] ! I0610 12:30:59.347078       1 aggregator.go:163] waiting for initial CRD sync...
	I0610 12:32:08.479897    8536 command_runner.go:130] ! I0610 12:30:59.347170       1 controller.go:78] Starting OpenAPI AggregationController
	I0610 12:32:08.479897    8536 command_runner.go:130] ! I0610 12:30:59.347256       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0610 12:32:08.479971    8536 command_runner.go:130] ! I0610 12:30:59.347656       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0610 12:32:08.479971    8536 command_runner.go:130] ! I0610 12:30:59.347947       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0610 12:32:08.479971    8536 command_runner.go:130] ! I0610 12:30:59.348233       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0610 12:32:08.479971    8536 command_runner.go:130] ! I0610 12:30:59.348295       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0610 12:32:08.480047    8536 command_runner.go:130] ! I0610 12:30:59.341877       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0610 12:32:08.480047    8536 command_runner.go:130] ! I0610 12:30:59.377996       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0610 12:32:08.480047    8536 command_runner.go:130] ! I0610 12:30:59.378109       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0610 12:32:08.480104    8536 command_runner.go:130] ! I0610 12:30:59.378362       1 controller.go:139] Starting OpenAPI controller
	I0610 12:32:08.480104    8536 command_runner.go:130] ! I0610 12:30:59.378742       1 controller.go:87] Starting OpenAPI V3 controller
	I0610 12:32:08.480153    8536 command_runner.go:130] ! I0610 12:30:59.378883       1 naming_controller.go:291] Starting NamingConditionController
	I0610 12:32:08.480153    8536 command_runner.go:130] ! I0610 12:30:59.379043       1 establishing_controller.go:76] Starting EstablishingController
	I0610 12:32:08.480178    8536 command_runner.go:130] ! I0610 12:30:59.379247       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0610 12:32:08.480178    8536 command_runner.go:130] ! I0610 12:30:59.379438       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0610 12:32:08.480223    8536 command_runner.go:130] ! I0610 12:30:59.379518       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0610 12:32:08.480223    8536 command_runner.go:130] ! I0610 12:30:59.379777       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:08.480261    8536 command_runner.go:130] ! I0610 12:30:59.379999       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:08.480289    8536 command_runner.go:130] ! I0610 12:30:59.524664       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:08.480289    8536 command_runner.go:130] ! I0610 12:30:59.525326       1 policy_source.go:224] refreshing policies
	I0610 12:32:08.480341    8536 command_runner.go:130] ! I0610 12:30:59.543486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 12:32:08.480341    8536 command_runner.go:130] ! I0610 12:30:59.547084       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 12:32:08.480341    8536 command_runner.go:130] ! I0610 12:30:59.548579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 12:32:08.480341    8536 command_runner.go:130] ! I0610 12:30:59.549972       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 12:32:08.480415    8536 command_runner.go:130] ! I0610 12:30:59.550011       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 12:32:08.480415    8536 command_runner.go:130] ! I0610 12:30:59.551151       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 12:32:08.480415    8536 command_runner.go:130] ! I0610 12:30:59.554229       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 12:32:08.480415    8536 command_runner.go:130] ! I0610 12:30:59.560228       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 12:32:08.480479    8536 command_runner.go:130] ! I0610 12:30:59.578343       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 12:32:08.480479    8536 command_runner.go:130] ! I0610 12:30:59.578414       1 aggregator.go:165] initial CRD sync complete...
	I0610 12:32:08.480479    8536 command_runner.go:130] ! I0610 12:30:59.578429       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 12:32:08.480479    8536 command_runner.go:130] ! I0610 12:30:59.578437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 12:32:08.480545    8536 command_runner.go:130] ! I0610 12:30:59.578466       1 cache.go:39] Caches are synced for autoregister controller
	I0610 12:32:08.480545    8536 command_runner.go:130] ! I0610 12:30:59.606740       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 12:32:08.480545    8536 command_runner.go:130] ! I0610 12:31:00.360768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 12:32:08.480545    8536 command_runner.go:130] ! W0610 12:31:00.893787       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.150.144]
	I0610 12:32:08.480609    8536 command_runner.go:130] ! I0610 12:31:00.913283       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 12:32:08.480609    8536 command_runner.go:130] ! I0610 12:31:00.933946       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 12:32:08.480665    8536 command_runner.go:130] ! I0610 12:31:02.471259       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 12:32:08.480665    8536 command_runner.go:130] ! I0610 12:31:02.690867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 12:32:08.480665    8536 command_runner.go:130] ! I0610 12:31:02.714405       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 12:32:08.480665    8536 command_runner.go:130] ! I0610 12:31:02.840117       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 12:32:08.480728    8536 command_runner.go:130] ! I0610 12:31:02.856715       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
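
	Note: the repeated "Skipping API <group>/<version> because it has no resources" warnings are expected on v1.30: the apiserver only registers GroupVersions that still serve resources, so removed beta APIs (autoscaling/v2beta1, policy/v1beta1, flowcontrol.apiserver.k8s.io/v1beta1, and so on) are skipped by design. The set of groups actually served can be confirmed against the running cluster with:

	    kubectl --context multinode-813300 api-versions
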
	I0610 12:32:08.490090    8536 logs.go:123] Gathering logs for etcd [877ee07c1499] ...
	I0610 12:32:08.490090    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877ee07c1499"
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.207374Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208407Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.150.144:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.150.144:2380","--initial-cluster=multinode-813300=https://172.17.150.144:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.150.144:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.150.144:2380","--name=multinode-813300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208499Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.208577Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208593Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.150.144:2380"]}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208715Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.218326Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"]}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.22047Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-813300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.244201Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.944438ms"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.274404Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.303075Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","commit-index":1913}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=()"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became follower at term 2"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8f4442f54c46fb8d [peers: [], term: 2, commit: 1913, applied: 0, lastindex: 1913, lastterm: 2]"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.318917Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.323726Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1273}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.328272Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1642}
	I0610 12:32:08.518372    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.335671Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0610 12:32:08.518422    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.347777Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8f4442f54c46fb8d","timeout":"7s"}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.349755Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8f4442f54c46fb8d"}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.350228Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8f4442f54c46fb8d","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.352715Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.36067Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361057Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361302Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0610 12:32:08.518754    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=(10323449867154160525)"}
	I0610 12:32:08.518754    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363612Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","added-peer-id":"8f4442f54c46fb8d","added-peer-peer-urls":["https://172.17.159.171:2380"]}
	I0610 12:32:08.518754    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364067Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","cluster-version":"3.5"}
	I0610 12:32:08.518834    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0610 12:32:08.518875    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.367772Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:08.518971    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.373962Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.150.144:2380"}
	I0610 12:32:08.518971    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.374209Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.150.144:2380"}
	I0610 12:32:08.519017    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375497Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8f4442f54c46fb8d","initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0610 12:32:08.519058    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375805Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0610 12:32:08.519103    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d is starting a new election at term 2"}
	I0610 12:32:08.519143    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.50539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became pre-candidate at term 2"}
	I0610 12:32:08.519193    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgPreVoteResp from 8f4442f54c46fb8d at term 2"}
	I0610 12:32:08.519233    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became candidate at term 3"}
	I0610 12:32:08.519233    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgVoteResp from 8f4442f54c46fb8d at term 3"}
	I0610 12:32:08.519279    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became leader at term 3"}
	I0610 12:32:08.519279    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8f4442f54c46fb8d elected leader 8f4442f54c46fb8d at term 3"}
	I0610 12:32:08.519318    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.511486Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8f4442f54c46fb8d","local-member-attributes":"{Name:multinode-813300 ClientURLs:[https://172.17.150.144:2379]}","request-path":"/0/members/8f4442f54c46fb8d/attributes","cluster-id":"ede117c4f607edf2","publish-timeout":"7s"}
	I0610 12:32:08.519362    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:08.519362    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512682Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:08.519401    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.517481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0610 12:32:08.519446    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0610 12:32:08.519486    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520973Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0610 12:32:08.519486    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.543402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.150.144:2379"}
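
	Note: the raft sequence above is a normal single-member recovery: the member restarts as a follower at term 2, pre-votes, becomes candidate and then leader at term 3, and only afterwards serves client traffic on 2379. One detail worth flagging: the restored membership record still carries the old peer URL https://172.17.159.171:2380 (the "added member" line) while the server now listens on 172.17.150.144, consistent with the Hyper-V VM receiving a new IP across the restart. A health check against this etcd, reusing the cert paths from its own command line (a hypothetical manual check; etcdctl ships in the etcd image), would look like:

	    out/minikube-windows-amd64.exe -p multinode-813300 ssh -- sudo docker exec 877ee07c1499 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint status -w table
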
	I0610 12:32:08.528756    8536 logs.go:123] Gathering logs for coredns [24f3f7e041f9] ...
	I0610 12:32:08.528756    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f3f7e041f9"
	I0610 12:32:08.557354    8536 command_runner.go:130] > .:53
	I0610 12:32:08.557354    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:08.557354    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:08.558174    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:08.558174    8536 command_runner.go:130] > [INFO] 127.0.0.1:34387 - 41508 "HINFO IN 7171992165040069679.5605173313288368349. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051230172s
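
	Note: the lone NXDOMAIN for a random HINFO query is CoreDNS's loop-plugin self-probe rather than real client traffic; NXDOMAIN is the healthy outcome (no forwarding loop detected). The Corefile behind the configuration SHA512 printed above can be read back with:

	    kubectl --context multinode-813300 -n kube-system get configmap coredns -o jsonpath="{.data.Corefile}"
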
	I0610 12:32:08.559962    8536 logs.go:123] Gathering logs for kube-scheduler [d90e72ef4670] ...
	I0610 12:32:08.560013    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90e72ef4670"
	I0610 12:32:08.589079    8536 command_runner.go:130] ! I0610 12:30:56.811878       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:08.589079    8536 command_runner.go:130] ! W0610 12:30:59.481898       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:08.589617    8536 command_runner.go:130] ! W0610 12:30:59.482123       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.589617    8536 command_runner.go:130] ! W0610 12:30:59.482217       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:08.589617    8536 command_runner.go:130] ! W0610 12:30:59.482255       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.514164       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.514266       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.518405       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.518496       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.518958       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.519337       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:08.589894    8536 command_runner.go:130] ! I0610 12:30:59.619122       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
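
	Note: the restarted scheduler hits the same extension-apiserver-authentication lookup race, logs the suggested rolebinding fix, continues without authentication configuration, and syncs its client-ca informer about 100 ms later, so no intervention was needed. Were the lookup failure permanent, the log's own hint translates to roughly the following (a sketch; the binding name is arbitrary, and --user replaces the --serviceaccount placeholder because the scheduler authenticates as the user system:kube-scheduler):

	    kubectl --context multinode-813300 -n kube-system create rolebinding scheduler-authn-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler
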
	I0610 12:32:08.592762    8536 logs.go:123] Gathering logs for kubelet ...
	I0610 12:32:08.592854    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 12:32:08.625320    8536 command_runner.go:130] > Jun 10 12:30:48 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:08.625320    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322075    1392 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:08.625320    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322142    1392 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.625320    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.324143    1392 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:08.625320    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: E0610 12:30:49.325228    1392 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:08.625543    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:08.625543    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:08.625602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:08.625602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:08.625602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:08.625675    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078361    1448 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:08.625675    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078445    1448 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.625718    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078696    1448 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:08.625718    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: E0610 12:30:50.078819    1448 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:08.625718    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:08.625783    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:08.625832    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:08.625832    8536 command_runner.go:130] > Jun 10 12:30:53 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:08.625878    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021338    1528 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:08.625878    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021853    1528 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.625942    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.022286    1528 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:08.625978    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.024650    1528 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0610 12:32:08.626018    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.040752    1528 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:08.626018    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.082883    1528 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0610 12:32:08.626053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.083180    1528 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0610 12:32:08.626092    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085143    1528 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0610 12:32:08.626166    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085256    1528 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-813300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0610 12:32:08.626218    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.086924    1528 topology_manager.go:138] "Creating topology manager with none policy"
	I0610 12:32:08.626218    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.087122    1528 container_manager_linux.go:301] "Creating device plugin manager"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.088486    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.090915    1528 kubelet.go:400] "Attempting to sync node with API server"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091108    1528 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091402    1528 kubelet.go:312] "Adding apiserver pod source"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.092259    1528 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.097253    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.097520    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.099693    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.099740    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.099843    1528 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.102710    1528 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.103981    1528 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.107194    1528 server.go:1264] "Started kubelet"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.120692    1528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.122088    1528 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.125028    1528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.128857    1528 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.132449    1528 server.go:455] "Adding debug handlers to kubelet server"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.124281    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.137444    1528 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.139221    1528 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.141909    1528 factory.go:221] Registration of the systemd container factory successfully
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147241    1528 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0610 12:32:08.626809    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147375    1528 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0610 12:32:08.626877    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.144942    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="200ms"
	I0610 12:32:08.626877    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.143108    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.627057    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.154145    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.627057    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.179909    1528 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0610 12:32:08.627110    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180022    1528 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0610 12:32:08.627110    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180086    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:08.627149    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181162    1528 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0610 12:32:08.627149    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181233    1528 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0610 12:32:08.627191    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181261    1528 policy_none.go:49] "None policy: Start"
	I0610 12:32:08.627191    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.192385    1528 reconciler.go:26] "Reconciler: start to sync state"
	I0610 12:32:08.627231    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193179    1528 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0610 12:32:08.627231    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193256    1528 state_mem.go:35] "Initializing new in-memory state store"
	I0610 12:32:08.627308    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193830    1528 state_mem.go:75] "Updated machine memory state"
	I0610 12:32:08.627343    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.197194    1528 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0610 12:32:08.627343    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.204265    1528 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0610 12:32:08.627386    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.219894    1528 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0610 12:32:08.627386    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.226098    1528 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-813300\" not found"
	I0610 12:32:08.627421    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.226649    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0610 12:32:08.627490    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.230123    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0610 12:32:08.627490    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231021    1528 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0610 12:32:08.627490    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231133    1528 kubelet.go:2337] "Starting kubelet main sync loop"
	I0610 12:32:08.627534    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.231189    1528 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0610 12:32:08.627534    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.244084    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:08.627570    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.247037    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.627570    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.247227    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.627570    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.253607    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:08.627570    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.255809    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:08.627570    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:08.627733    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:08.627733    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:08.627733    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:08.627815    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334683    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62db1c721951a36c62a6369a30c651a661eb2871f8363fa341ef8ad7b7080a07"
	I0610 12:32:08.627862    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334742    1528 topology_manager.go:215] "Topology Admit Handler" podUID="180cf4cc399d604c28cc4df1442ebd5a" podNamespace="kube-system" podName="kube-apiserver-multinode-813300"
	I0610 12:32:08.627862    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.336338    1528 topology_manager.go:215] "Topology Admit Handler" podUID="37865ce1914dc04a4a0a25e98b80ce35" podNamespace="kube-system" podName="kube-controller-manager-multinode-813300"
	I0610 12:32:08.627912    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.338106    1528 topology_manager.go:215] "Topology Admit Handler" podUID="4d9c84710aef19c4449f4b7691d0af07" podNamespace="kube-system" podName="kube-scheduler-multinode-813300"
	I0610 12:32:08.627912    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340794    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7d28a97ba1c48cbe8edd3eab76f64cdcdebf920a03921644f63d12856b642f0"
	I0610 12:32:08.627972    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340848    1528 topology_manager.go:215] "Topology Admit Handler" podUID="76e8893277ba7cea6624561880496e47" podNamespace="kube-system" podName="etcd-multinode-813300"
	I0610 12:32:08.627972    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.341927    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04d7b3d4fcc648cd6b447a383defba86200f1071acc892670457ebeebb52f22"
	I0610 12:32:08.627972    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.342208    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bc6043f7b92f091f4ceee7db3e11617072391c6e5303f4ecdafdb06d4b585a"
	I0610 12:32:08.628049    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.356667    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="400ms"
	I0610 12:32:08.628049    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.365771    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c"
	I0610 12:32:08.628109    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.380268    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b6aa9a0e1d1cbcee858808fc74f396cfba20777f2316093484920397e9b4ca"
	I0610 12:32:08.628153    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:08.628226    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397846    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-ca-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:08.628267    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397877    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:08.628307    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397922    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-flexvolume-dir\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:08.628363    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397961    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-k8s-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:08.628363    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397979    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-kubeconfig\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:08.628445    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398000    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-data\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:08.628486    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398019    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-k8s-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:08.628538    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398038    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-ca-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:08.628538    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398055    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d9c84710aef19c4449f4b7691d0af07-kubeconfig\") pod \"kube-scheduler-multinode-813300\" (UID: \"4d9c84710aef19c4449f4b7691d0af07\") " pod="kube-system/kube-scheduler-multinode-813300"
	I0610 12:32:08.628606    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398073    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-certs\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:08.628663    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.400870    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d"
	I0610 12:32:08.628692    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.416196    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689b8976cc0293bf6ae2ffaf7abbe0a59cfa7521907fd652e86da3912515d25d"
	I0610 12:32:08.628767    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.442360    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10e49596de5e51f9986bebf2105f07084a083e5e8c2ab50684531210b032662"
	I0610 12:32:08.628799    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.454932    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.456598    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.759421    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="800ms"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.858477    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.859580    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.205231    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.205310    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.248476    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.249836    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.406658    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.406731    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.487592    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.561164    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="1.6s"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.661352    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.663943    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.751130    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.751205    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.629409    8536 command_runner.go:130] > Jun 10 12:30:56 multinode-813300 kubelet[1528]: E0610 12:30:56.215699    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:08.629531    8536 command_runner.go:130] > Jun 10 12:30:57 multinode-813300 kubelet[1528]: I0610 12:30:57.265569    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:08.629531    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636898    1528 kubelet_node_status.go:112] "Node was previously registered" node="multinode-813300"
	I0610 12:32:08.629531    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636993    1528 kubelet_node_status.go:76] "Successfully registered node" node="multinode-813300"
	I0610 12:32:08.629592    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.638685    1528 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0610 12:32:08.629728    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639257    1528 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0610 12:32:08.629796    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639985    1528 setters.go:580] "Node became not ready" node="multinode-813300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-10T12:30:59Z","lastTransitionTime":"2024-06-10T12:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0610 12:32:08.629832    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.103240    1528 apiserver.go:52] "Watching apiserver"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109200    1528 topology_manager.go:215] "Topology Admit Handler" podUID="40bf0aff-00b2-40c7-bed7-52b8cadbc3a1" podNamespace="kube-system" podName="kube-proxy-nrpvt"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109472    1528 topology_manager.go:215] "Topology Admit Handler" podUID="aad8124e-6c05-4719-9adb-edc11b3cce42" podNamespace="kube-system" podName="kindnet-29gbv"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109721    1528 topology_manager.go:215] "Topology Admit Handler" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kbhvv"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109954    1528 topology_manager.go:215] "Topology Admit Handler" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e" podNamespace="kube-system" podName="storage-provisioner"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110077    1528 topology_manager.go:215] "Topology Admit Handler" podUID="3191c71a-8c87-4390-8232-8653f494d1f0" podNamespace="default" podName="busybox-fc5497c4f-z28tq"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.110308    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110641    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-813300" podUID="f824b391-b3d2-49ec-ba7d-863cb2150f81"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.111896    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.115871    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.147565    1528 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.155423    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160314    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6dfedc3-d6ff-412c-8a13-40a493c4199e-tmp\") pod \"storage-provisioner\" (UID: \"f6dfedc3-d6ff-412c-8a13-40a493c4199e\") " pod="kube-system/storage-provisioner"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160428    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-cni-cfg\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-xtables-lock\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161224    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-xtables-lock\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161359    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-lib-modules\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:08.630466    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162089    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.630515    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162182    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.662151031 +0000 UTC m=+6.753273950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.630595    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.162238    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-lib-modules\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:08.630595    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.175000    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-813300"
	I0610 12:32:08.630649    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.186991    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630649    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187290    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630750    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.687498638 +0000 UTC m=+6.778621457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.246331    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f80d01e953cc664fc05c397fdad000" path="/var/lib/kubelet/pods/93f80d01e953cc664fc05c397fdad000/volumes"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.248399    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa7bd9cfb361baaed8d7d5729a6c77c" path="/var/lib/kubelet/pods/baa7bd9cfb361baaed8d7d5729a6c77c/volumes"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.316426    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-813300" podStartSLOduration=0.316407314 podStartE2EDuration="316.407314ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.316147208 +0000 UTC m=+6.407270027" watchObservedRunningTime="2024-06-10 12:31:00.316407314 +0000 UTC m=+6.407530233"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.439081    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-813300" podStartSLOduration=0.439018164 podStartE2EDuration="439.018164ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.409703778 +0000 UTC m=+6.500826597" watchObservedRunningTime="2024-06-10 12:31:00.439018164 +0000 UTC m=+6.530141083"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.631684    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667882    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667966    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.667947638 +0000 UTC m=+7.759070557 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769226    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769334    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769428    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.769408565 +0000 UTC m=+7.860531384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.231939    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.679952    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.680142    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.680120563 +0000 UTC m=+9.771243482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.781772    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782050    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782132    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.7821123 +0000 UTC m=+9.873235219 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.631364    8536 command_runner.go:130] > Jun 10 12:31:02 multinode-813300 kubelet[1528]: E0610 12:31:02.234039    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.631419    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.232296    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.631419    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.701884    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.631529    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.702058    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.702037863 +0000 UTC m=+13.793160782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.631529    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802160    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.631570    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802233    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.631731    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802292    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.802272966 +0000 UTC m=+13.893395785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.207349    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.238069    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:05 multinode-813300 kubelet[1528]: E0610 12:31:05.232753    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:06 multinode-813300 kubelet[1528]: E0610 12:31:06.233804    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.231988    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736592    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736825    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.736801176 +0000 UTC m=+21.827923995 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837037    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837146    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.632591    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837219    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.837199504 +0000 UTC m=+21.928322423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.632695    8536 command_runner.go:130] > Jun 10 12:31:08 multinode-813300 kubelet[1528]: E0610 12:31:08.232310    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.632695    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.208416    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.632695    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.231620    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.632774    8536 command_runner.go:130] > Jun 10 12:31:10 multinode-813300 kubelet[1528]: E0610 12:31:10.233882    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.632854    8536 command_runner.go:130] > Jun 10 12:31:11 multinode-813300 kubelet[1528]: E0610 12:31:11.232126    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.632933    8536 command_runner.go:130] > Jun 10 12:31:12 multinode-813300 kubelet[1528]: E0610 12:31:12.233695    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633011    8536 command_runner.go:130] > Jun 10 12:31:13 multinode-813300 kubelet[1528]: E0610 12:31:13.231660    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633136    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.210433    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.633204    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.234870    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633204    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.232790    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633204    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816637    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.633344    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816990    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.816931565 +0000 UTC m=+37.908054384 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.633375    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918429    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.633405    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918619    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.633450    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918694    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.918675278 +0000 UTC m=+38.009798097 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.633538    8536 command_runner.go:130] > Jun 10 12:31:16 multinode-813300 kubelet[1528]: E0610 12:31:16.234954    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633538    8536 command_runner.go:130] > Jun 10 12:31:17 multinode-813300 kubelet[1528]: E0610 12:31:17.231668    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633620    8536 command_runner.go:130] > Jun 10 12:31:18 multinode-813300 kubelet[1528]: E0610 12:31:18.232656    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633620    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.214153    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.633697    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.231639    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633775    8536 command_runner.go:130] > Jun 10 12:31:20 multinode-813300 kubelet[1528]: E0610 12:31:20.234429    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633775    8536 command_runner.go:130] > Jun 10 12:31:21 multinode-813300 kubelet[1528]: E0610 12:31:21.232080    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633854    8536 command_runner.go:130] > Jun 10 12:31:22 multinode-813300 kubelet[1528]: E0610 12:31:22.232638    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633854    8536 command_runner.go:130] > Jun 10 12:31:23 multinode-813300 kubelet[1528]: E0610 12:31:23.233105    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633932    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.216593    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.633932    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.233280    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.634011    8536 command_runner.go:130] > Jun 10 12:31:25 multinode-813300 kubelet[1528]: E0610 12:31:25.232513    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.634089    8536 command_runner.go:130] > Jun 10 12:31:26 multinode-813300 kubelet[1528]: E0610 12:31:26.232337    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:27 multinode-813300 kubelet[1528]: E0610 12:31:27.233152    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:28 multinode-813300 kubelet[1528]: E0610 12:31:28.234103    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.218816    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.232070    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:30 multinode-813300 kubelet[1528]: E0610 12:31:30.231766    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.231673    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884791    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884975    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.884956587 +0000 UTC m=+69.976079506 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.634691    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985181    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.634691    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985216    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.634691    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.98525598 +0000 UTC m=+70.076378799 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.634691    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.232018    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.634837    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.476305    1528 scope.go:117] "RemoveContainer" containerID="d32ce22e31b06bacb7530f3513c1f864d77685269868404ad7c71a4f15d91e41"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.477175    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.477659    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f6dfedc3-d6ff-412c-8a13-40a493c4199e)\"" pod="kube-system/storage-provisioner" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:33 multinode-813300 kubelet[1528]: E0610 12:31:33.232631    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 kubelet[1528]: I0610 12:31:47.231895    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.214930    1528 scope.go:117] "RemoveContainer" containerID="34b9299d74e382eadb8e7df1029506efc87e283ac8b38024d9524b8bb815f705"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: E0610 12:31:54.266189    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.275663    1528 scope.go:117] "RemoveContainer" containerID="ba52603f8387590319a4d5a9511265065e2f90bff6628bec2f622754e034c70a"
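Note on the kubelet block above: the MountVolume.SetUp retries back off exponentially; durationBeforeRetry grows 8s (m=+21.9), then 16s (m=+37.9), then 32s (m=+69.9) for the same kube-api-access-tkl2j and config-volume mounts. A minimal Go sketch of that capped-doubling pattern (illustrative only; nextBackoff and the 2m cap are assumptions, not kubelet's actual code):

	package main

	import (
		"fmt"
		"time"
	)

	// nextBackoff doubles the previous delay up to a limit, matching the
	// 8s -> 16s -> 32s progression visible in the retries above.
	// The function and the limit value are illustrative assumptions.
	func nextBackoff(prev, limit time.Duration) time.Duration {
		next := prev * 2
		if next > limit {
			return limit
		}
		return next
	}

	func main() {
		d := 8 * time.Second
		for i := 1; i <= 4; i++ {
			fmt.Printf("attempt %d failed; no retries permitted for %s\n", i, d)
			d = nextBackoff(d, 2*time.Minute)
		}
	}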
	I0610 12:32:08.678891    8536 logs.go:123] Gathering logs for coredns [f2e39052db19] ...
	I0610 12:32:08.678891    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e39052db19"
	I0610 12:32:08.714741    8536 command_runner.go:130] > .:53
	I0610 12:32:08.715066    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:08.715137    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:08.715137    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:08.715183    8536 command_runner.go:130] > [INFO] 127.0.0.1:46276 - 35337 "HINFO IN 965239639799927989.3587586823131848737. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.052340371s
	I0610 12:32:08.715264    8536 command_runner.go:130] > [INFO] 10.244.1.2:36040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003047s
	I0610 12:32:08.715264    8536 command_runner.go:130] > [INFO] 10.244.1.2:51901 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.165635405s
	I0610 12:32:08.715264    8536 command_runner.go:130] > [INFO] 10.244.1.2:38890 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.065664181s
	I0610 12:32:08.715264    8536 command_runner.go:130] > [INFO] 10.244.1.2:40219 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.107303974s
	I0610 12:32:08.715347    8536 command_runner.go:130] > [INFO] 10.244.0.3:38184 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002396s
	I0610 12:32:08.715347    8536 command_runner.go:130] > [INFO] 10.244.0.3:57966 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001307s
	I0610 12:32:08.715408    8536 command_runner.go:130] > [INFO] 10.244.0.3:38338 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002131s
	I0610 12:32:08.715408    8536 command_runner.go:130] > [INFO] 10.244.0.3:41898 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000121s
	I0610 12:32:08.715487    8536 command_runner.go:130] > [INFO] 10.244.1.2:49043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200101s
	I0610 12:32:08.715544    8536 command_runner.go:130] > [INFO] 10.244.1.2:53918 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.147842886s
	I0610 12:32:08.715618    8536 command_runner.go:130] > [INFO] 10.244.1.2:50531 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001726s
	I0610 12:32:08.715679    8536 command_runner.go:130] > [INFO] 10.244.1.2:41881 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001246s
	I0610 12:32:08.715727    8536 command_runner.go:130] > [INFO] 10.244.1.2:34708 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.030026838s
	I0610 12:32:08.715727    8536 command_runner.go:130] > [INFO] 10.244.1.2:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002834s
	I0610 12:32:08.715786    8536 command_runner.go:130] > [INFO] 10.244.1.2:58166 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001901s
	I0610 12:32:08.715786    8536 command_runner.go:130] > [INFO] 10.244.1.2:46174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001048s
	I0610 12:32:08.715786    8536 command_runner.go:130] > [INFO] 10.244.0.3:52212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003513s
	I0610 12:32:08.715864    8536 command_runner.go:130] > [INFO] 10.244.0.3:44369 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095801s
	I0610 12:32:08.715926    8536 command_runner.go:130] > [INFO] 10.244.0.3:38578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001615s
	I0610 12:32:08.715926    8536 command_runner.go:130] > [INFO] 10.244.0.3:38593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002977s
	I0610 12:32:08.715991    8536 command_runner.go:130] > [INFO] 10.244.0.3:38526 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137201s
	I0610 12:32:08.715991    8536 command_runner.go:130] > [INFO] 10.244.0.3:48445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001467s
	I0610 12:32:08.715991    8536 command_runner.go:130] > [INFO] 10.244.0.3:47462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000731s
	I0610 12:32:08.716052    8536 command_runner.go:130] > [INFO] 10.244.0.3:58225 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196101s
	I0610 12:32:08.716052    8536 command_runner.go:130] > [INFO] 10.244.1.2:35924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001833s
	I0610 12:32:08.716052    8536 command_runner.go:130] > [INFO] 10.244.1.2:51712 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001386s
	I0610 12:32:08.716129    8536 command_runner.go:130] > [INFO] 10.244.1.2:37161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007s
	I0610 12:32:08.716129    8536 command_runner.go:130] > [INFO] 10.244.1.2:37141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141s
	I0610 12:32:08.716129    8536 command_runner.go:130] > [INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001227s
	I0610 12:32:08.716190    8536 command_runner.go:130] > [INFO] 10.244.0.3:56133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247001s
	I0610 12:32:08.716190    8536 command_runner.go:130] > [INFO] 10.244.0.3:48451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000604s
	I0610 12:32:08.716256    8536 command_runner.go:130] > [INFO] 10.244.0.3:38368 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001264s
	I0610 12:32:08.716256    8536 command_runner.go:130] > [INFO] 10.244.1.2:44129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001056s
	I0610 12:32:08.716316    8536 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001955s
	I0610 12:32:08.716316    8536 command_runner.go:130] > [INFO] 10.244.1.2:59467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001589s
	I0610 12:32:08.716395    8536 command_runner.go:130] > [INFO] 10.244.1.2:53581 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002131s
	I0610 12:32:08.716395    8536 command_runner.go:130] > [INFO] 10.244.0.3:41745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	I0610 12:32:08.716444    8536 command_runner.go:130] > [INFO] 10.244.0.3:53512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001784s
	I0610 12:32:08.716528    8536 command_runner.go:130] > [INFO] 10.244.0.3:56441 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001208s
	I0610 12:32:08.716568    8536 command_runner.go:130] > [INFO] 10.244.0.3:55640 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001199s
	I0610 12:32:08.716568    8536 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0610 12:32:08.716617    8536 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
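Each coredns query line above follows the plugin/log layout: client ip:port, a per-client counter, the quoted question ("TYPE IN name. proto size do bufsize"), then rcode, header flags, response size, and latency. A hypothetical Go helper that splits out those fields from a line as printed here (a sketch against this output, not coredns's own parser):

	package main

	import (
		"fmt"
		"strings"
	)

	// queryLog holds the fields of one query line as shown above.
	// The struct and parser are illustrative, not coredns internals.
	type queryLog struct {
		Client   string // source ip:port
		Question string // quoted section, e.g. "AAAA IN kubernetes.io. udp 31 false 512"
		Rcode    string // NOERROR, NXDOMAIN, ...
		Duration string // trailing latency, e.g. 0.165635405s
	}

	func parseQueryLine(line string) (queryLog, bool) {
		// Isolate the quoted question section first.
		open := strings.Index(line, `"`)
		end := strings.LastIndex(line, `"`)
		if open < 0 || end <= open {
			return queryLog{}, false
		}
		head := strings.Fields(line[:open])  // [INFO] client - counter
		tail := strings.Fields(line[end+1:]) // rcode flags size duration
		if len(head) < 2 || len(tail) < 4 {
			return queryLog{}, false
		}
		return queryLog{
			Client:   head[1],
			Question: line[open+1 : end],
			Rcode:    tail[0],
			Duration: tail[3],
		}, true
	}

	func main() {
		line := `[INFO] 10.244.1.2:51901 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.165635405s`
		if q, ok := parseQueryLine(line); ok {
			fmt.Printf("%s asked %q -> %s in %s\n", q.Client, q.Question, q.Rcode, q.Duration)
		}
	}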
	I0610 12:32:08.719098    8536 logs.go:123] Gathering logs for kube-controller-manager [f1409bf44ff1] ...
	I0610 12:32:08.719685    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1409bf44ff1"
	I0610 12:32:08.762120    8536 command_runner.go:130] ! I0610 12:07:55.502430       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:08.762559    8536 command_runner.go:130] ! I0610 12:07:56.114557       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:08.762619    8536 command_runner.go:130] ! I0610 12:07:56.114858       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.762687    8536 command_runner.go:130] ! I0610 12:07:56.117078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:08.762687    8536 command_runner.go:130] ! I0610 12:07:56.117365       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:08.762793    8536 command_runner.go:130] ! I0610 12:07:56.118392       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:08.762793    8536 command_runner.go:130] ! I0610 12:07:56.118623       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:08.762865    8536 command_runner.go:130] ! I0610 12:08:00.413505       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:08.762865    8536 command_runner.go:130] ! I0610 12:08:00.413532       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:08.762932    8536 command_runner.go:130] ! I0610 12:08:00.454038       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:08.762932    8536 command_runner.go:130] ! I0610 12:08:00.454303       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:08.763041    8536 command_runner.go:130] ! I0610 12:08:00.454341       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:08.763041    8536 command_runner.go:130] ! I0610 12:08:00.474947       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:08.763113    8536 command_runner.go:130] ! I0610 12:08:00.475105       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:08.763113    8536 command_runner.go:130] ! I0610 12:08:00.475116       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:08.763113    8536 command_runner.go:130] ! I0610 12:08:00.514703       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:08.763172    8536 command_runner.go:130] ! I0610 12:08:10.509914       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.510020       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.511115       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.511148       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.515475       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.515547       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.516308       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.516334       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.516340       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.531416       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.531434       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.531293       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.543960       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.544630       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.544667       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.567000       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.567602       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.568240       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.586627       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.587637       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.587654       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.623685       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.623975       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.624342       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.639985       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.640617       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.640810       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.702326       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.706246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.711937       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:08.763776    8536 command_runner.go:130] ! I0610 12:08:10.712131       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:08.763776    8536 command_runner.go:130] ! I0610 12:08:10.712146       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:08.763840    8536 command_runner.go:130] ! I0610 12:08:10.712235       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:08.763840    8536 command_runner.go:130] ! I0610 12:08:10.712265       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:08.763917    8536 command_runner.go:130] ! I0610 12:08:10.724980       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:08.763971    8536 command_runner.go:130] ! I0610 12:08:10.726393       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:08.764020    8536 command_runner.go:130] ! I0610 12:08:10.726653       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:08.764086    8536 command_runner.go:130] ! I0610 12:08:10.742390       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:08.764134    8536 command_runner.go:130] ! I0610 12:08:10.743099       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:08.764195    8536 command_runner.go:130] ! I0610 12:08:10.744498       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:08.764195    8536 command_runner.go:130] ! I0610 12:08:10.759177       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:08.764262    8536 command_runner.go:130] ! I0610 12:08:10.759262       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:08.764262    8536 command_runner.go:130] ! I0610 12:08:10.759917       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:08.764323    8536 command_runner.go:130] ! I0610 12:08:10.759932       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:08.764389    8536 command_runner.go:130] ! I0610 12:08:10.901245       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:08.764451    8536 command_runner.go:130] ! I0610 12:08:10.903470       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:08.764451    8536 command_runner.go:130] ! I0610 12:08:10.903502       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:08.764516    8536 command_runner.go:130] ! I0610 12:08:11.064066       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:08.764576    8536 command_runner.go:130] ! I0610 12:08:11.064123       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:08.764576    8536 command_runner.go:130] ! I0610 12:08:11.064135       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:08.764642    8536 command_runner.go:130] ! I0610 12:08:11.202164       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:08.764705    8536 command_runner.go:130] ! I0610 12:08:11.202227       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:08.764769    8536 command_runner.go:130] ! I0610 12:08:11.202239       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:08.764834    8536 command_runner.go:130] ! I0610 12:08:11.352380       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:08.764892    8536 command_runner.go:130] ! I0610 12:08:11.352546       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:08.764941    8536 command_runner.go:130] ! I0610 12:08:11.352575       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:08.765000    8536 command_runner.go:130] ! I0610 12:08:11.656918       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:08.765000    8536 command_runner.go:130] ! I0610 12:08:11.657560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:08.765070    8536 command_runner.go:130] ! I0610 12:08:11.657950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:08.765153    8536 command_runner.go:130] ! I0610 12:08:11.658269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:08.765153    8536 command_runner.go:130] ! I0610 12:08:11.658437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:08.765262    8536 command_runner.go:130] ! I0610 12:08:11.658699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:08.765345    8536 command_runner.go:130] ! I0610 12:08:11.658785       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:08.765345    8536 command_runner.go:130] ! I0610 12:08:11.658822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:08.765405    8536 command_runner.go:130] ! I0610 12:08:11.658849       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:08.765530    8536 command_runner.go:130] ! I0610 12:08:11.658870       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:08.765530    8536 command_runner.go:130] ! I0610 12:08:11.658895       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:08.765610    8536 command_runner.go:130] ! I0610 12:08:11.658915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:08.765666    8536 command_runner.go:130] ! I0610 12:08:11.658950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:08.765800    8536 command_runner.go:130] ! I0610 12:08:11.658987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:08.765855    8536 command_runner.go:130] ! I0610 12:08:11.659004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:08.765855    8536 command_runner.go:130] ! I0610 12:08:11.659056       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:08.765938    8536 command_runner.go:130] ! W0610 12:08:11.659073       1 shared_informer.go:597] resyncPeriod 13h6m28.341601393s is smaller than resyncCheckPeriod 19h0m49.916968618s and the informer has already started. Changing it to 19h0m49.916968618s
	I0610 12:32:08.766005    8536 command_runner.go:130] ! I0610 12:08:11.659195       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:08.766094    8536 command_runner.go:130] ! I0610 12:08:11.659214       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:08.766153    8536 command_runner.go:130] ! I0610 12:08:11.659236       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:08.766206    8536 command_runner.go:130] ! I0610 12:08:11.659287       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:08.766255    8536 command_runner.go:130] ! I0610 12:08:11.659312       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:08.766315    8536 command_runner.go:130] ! I0610 12:08:11.659579       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:08.766359    8536 command_runner.go:130] ! I0610 12:08:11.659591       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:08.766396    8536 command_runner.go:130] ! I0610 12:08:11.659608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:08.766396    8536 command_runner.go:130] ! I0610 12:08:11.895313       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:08.766462    8536 command_runner.go:130] ! I0610 12:08:11.895383       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:08.766462    8536 command_runner.go:130] ! I0610 12:08:11.895693       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:08.766552    8536 command_runner.go:130] ! I0610 12:08:11.896490       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:08.766620    8536 command_runner.go:130] ! I0610 12:08:12.154521       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:08.766668    8536 command_runner.go:130] ! I0610 12:08:12.154576       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:08.766729    8536 command_runner.go:130] ! I0610 12:08:12.154658       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:08.766729    8536 command_runner.go:130] ! I0610 12:08:12.154690       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:08.766806    8536 command_runner.go:130] ! I0610 12:08:12.301351       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:08.766854    8536 command_runner.go:130] ! I0610 12:08:12.301495       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:08.766854    8536 command_runner.go:130] ! I0610 12:08:12.301508       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:08.766920    8536 command_runner.go:130] ! I0610 12:08:12.495309       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:08.766995    8536 command_runner.go:130] ! I0610 12:08:12.495425       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:08.766995    8536 command_runner.go:130] ! I0610 12:08:12.495645       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:08.766995    8536 command_runner.go:130] ! I0610 12:08:12.495683       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:08.767070    8536 command_runner.go:130] ! E0610 12:08:12.550245       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:08.767124    8536 command_runner.go:130] ! I0610 12:08:12.550671       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:08.767202    8536 command_runner.go:130] ! E0610 12:08:12.700493       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:08.767267    8536 command_runner.go:130] ! I0610 12:08:12.700528       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:08.767368    8536 command_runner.go:130] ! I0610 12:08:12.700538       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:08.767426    8536 command_runner.go:130] ! I0610 12:08:12.859280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:08.767426    8536 command_runner.go:130] ! I0610 12:08:12.859580       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:08.767490    8536 command_runner.go:130] ! I0610 12:08:12.859953       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:08.767547    8536 command_runner.go:130] ! I0610 12:08:12.906626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:08.767547    8536 command_runner.go:130] ! I0610 12:08:12.907724       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:08.767547    8536 command_runner.go:130] ! I0610 12:08:13.050431       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:08.767611    8536 command_runner.go:130] ! I0610 12:08:13.050510       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:08.767687    8536 command_runner.go:130] ! I0610 12:08:13.205885       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:08.767745    8536 command_runner.go:130] ! I0610 12:08:13.205970       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.205982       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.351713       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.351815       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.351830       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.603420       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.603498       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.603510       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.750262       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.750789       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.750809       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.900118       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.900639       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.900897       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.054008       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.054156       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.054170       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.199527       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.199627       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.199683       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.199694       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.351474       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.352193       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.352213       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:08.768444    8536 command_runner.go:130] ! I0610 12:08:14.502148       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:08.768523    8536 command_runner.go:130] ! I0610 12:08:14.502250       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:08.768523    8536 command_runner.go:130] ! I0610 12:08:14.502262       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:08.768523    8536 command_runner.go:130] ! I0610 12:08:14.502696       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:08.768624    8536 command_runner.go:130] ! I0610 12:08:14.502825       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:08.768678    8536 command_runner.go:130] ! I0610 12:08:14.546684       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:08.768728    8536 command_runner.go:130] ! I0610 12:08:14.547077       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:08.768766    8536 command_runner.go:130] ! I0610 12:08:14.547608       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:08.768813    8536 command_runner.go:130] ! I0610 12:08:14.547097       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:08.768897    8536 command_runner.go:130] ! I0610 12:08:14.547127       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:08.768897    8536 command_runner.go:130] ! I0610 12:08:14.547931       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:08.768960    8536 command_runner.go:130] ! I0610 12:08:14.547138       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:08.769019    8536 command_runner.go:130] ! I0610 12:08:14.547188       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:08.769019    8536 command_runner.go:130] ! I0610 12:08:14.548434       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:08.769084    8536 command_runner.go:130] ! I0610 12:08:14.547199       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:08.769142    8536 command_runner.go:130] ! I0610 12:08:14.547257       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:08.769223    8536 command_runner.go:130] ! I0610 12:08:14.548692       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:08.769223    8536 command_runner.go:130] ! I0610 12:08:14.547265       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:08.769284    8536 command_runner.go:130] ! I0610 12:08:14.558628       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:08.769351    8536 command_runner.go:130] ! I0610 12:08:14.590023       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:08.769409    8536 command_runner.go:130] ! I0610 12:08:14.600506       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:08.769475    8536 command_runner.go:130] ! I0610 12:08:14.602694       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:08.769535    8536 command_runner.go:130] ! I0610 12:08:14.603324       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:08.769535    8536 command_runner.go:130] ! I0610 12:08:14.609611       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:08.769535    8536 command_runner.go:130] ! I0610 12:08:14.612038       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:08.769602    8536 command_runner.go:130] ! I0610 12:08:14.623629       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:08.769602    8536 command_runner.go:130] ! I0610 12:08:14.624495       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:08.769674    8536 command_runner.go:130] ! I0610 12:08:14.612329       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:08.769674    8536 command_runner.go:130] ! I0610 12:08:14.628289       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:08.769737    8536 command_runner.go:130] ! I0610 12:08:14.630516       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:08.769800    8536 command_runner.go:130] ! I0610 12:08:14.630648       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:08.769851    8536 command_runner.go:130] ! I0610 12:08:14.622860       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:08.769937    8536 command_runner.go:130] ! I0610 12:08:14.627541       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:08.769984    8536 command_runner.go:130] ! I0610 12:08:14.627554       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:08.770044    8536 command_runner.go:130] ! I0610 12:08:14.627562       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:08.770044    8536 command_runner.go:130] ! I0610 12:08:14.627813       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:08.770111    8536 command_runner.go:130] ! I0610 12:08:14.631141       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:08.770172    8536 command_runner.go:130] ! I0610 12:08:14.631364       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:08.770236    8536 command_runner.go:130] ! I0610 12:08:14.631669       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:08.770236    8536 command_runner.go:130] ! I0610 12:08:14.631834       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:08.770301    8536 command_runner.go:130] ! I0610 12:08:14.642451       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:08.770365    8536 command_runner.go:130] ! I0610 12:08:14.644828       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:08.770365    8536 command_runner.go:130] ! I0610 12:08:14.645380       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:08.770425    8536 command_runner.go:130] ! I0610 12:08:14.647678       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:08.770492    8536 command_runner.go:130] ! I0610 12:08:14.648798       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:08.770552    8536 command_runner.go:130] ! I0610 12:08:14.648809       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:08.770552    8536 command_runner.go:130] ! I0610 12:08:14.648848       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:08.770618    8536 command_runner.go:130] ! I0610 12:08:14.656075       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:08.770618    8536 command_runner.go:130] ! I0610 12:08:14.656781       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:08.770678    8536 command_runner.go:130] ! I0610 12:08:14.657449       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:08.770678    8536 command_runner.go:130] ! I0610 12:08:14.657643       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:08.770741    8536 command_runner.go:130] ! I0610 12:08:14.658125       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:08.770741    8536 command_runner.go:130] ! I0610 12:08:14.661079       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:08.770800    8536 command_runner.go:130] ! I0610 12:08:14.668926       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:08.770800    8536 command_runner.go:130] ! I0610 12:08:14.675620       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:08.770854    8536 command_runner.go:130] ! I0610 12:08:14.680953       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300" podCIDRs=["10.244.0.0/24"]
	I0610 12:32:08.770893    8536 command_runner.go:130] ! I0610 12:08:14.687842       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:08.770954    8536 command_runner.go:130] ! I0610 12:08:14.751377       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:08.770954    8536 command_runner.go:130] ! I0610 12:08:14.754827       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:08.771012    8536 command_runner.go:130] ! I0610 12:08:14.795731       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:08.771067    8536 command_runner.go:130] ! I0610 12:08:14.803976       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:08.771067    8536 command_runner.go:130] ! I0610 12:08:14.807376       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:08.771122    8536 command_runner.go:130] ! I0610 12:08:14.807800       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:08.771185    8536 command_runner.go:130] ! I0610 12:08:14.851108       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:08.771245    8536 command_runner.go:130] ! I0610 12:08:14.858915       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:08.771245    8536 command_runner.go:130] ! I0610 12:08:14.859692       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:14.864873       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:15.295934       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:15.296041       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:15.332772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:15.887603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="329.520484ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.024148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.478301ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.151441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.784808ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.151859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="288.402µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.577624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.03545ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.593339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.556101ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.593508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.3µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:30.535681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:30.566310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.4µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:32.538906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="180.301µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:32.610537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.137489ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:32.611020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:34.635560       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:11:28.859639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:11:28.879298       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m02" podCIDRs=["10.244.1.0/24"]
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:11:29.670639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:11:51.574110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:12:19.785464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.490556ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:12:19.804051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.524284ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:12:19.806222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:12:19.813010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.401µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:12:19.818841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.9µs"
	I0610 12:32:08.771915    8536 command_runner.go:130] ! I0610 12:12:22.803157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.023114ms"
	I0610 12:32:08.771979    8536 command_runner.go:130] ! I0610 12:12:22.803959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.7µs"
	I0610 12:32:08.771979    8536 command_runner.go:130] ! I0610 12:12:23.117968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.704624ms"
	I0610 12:32:08.771979    8536 command_runner.go:130] ! I0610 12:12:23.118507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.5µs"
	I0610 12:32:08.772089    8536 command_runner.go:130] ! I0610 12:25:52.678571       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:25:52.681612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:25:52.698797       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m03" podCIDRs=["10.244.2.0/24"]
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:25:54.878967       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:26:13.380155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:27:44.944679       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:28:15.516170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.644756ms"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:28:15.516815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.1µs"
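	[editor's note] The kube-controller-manager excerpt above shows the node-ipam-controller handing each joining node the next sequential /24 out of the 10.244.0.0/16 cluster CIDR (10.244.0.0/24 for multinode-813300, 10.244.1.0/24 for -m02, 10.244.2.0/24 for -m03). The sketch below is only an illustration of that carving pattern, not the controller's actual range_allocator code; the node names and cluster CIDR come from the log, everything else is assumed for the example.

	// podcidr_sketch.go - illustrative only; assumes sequential /24 allocation
	// out of 10.244.0.0/16, matching the "Set node PodCIDR" lines above.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, clusterCIDR, err := net.ParseCIDR("10.244.0.0/16")
		if err != nil {
			panic(err)
		}
		ip4 := clusterCIDR.IP.To4()
		nodes := []string{"multinode-813300", "multinode-813300-m02", "multinode-813300-m03"}
		for i, node := range nodes {
			// The i-th node gets the i-th /24 inside the /16: bump the third octet.
			subnet := &net.IPNet{
				IP:   net.IPv4(ip4[0], ip4[1], byte(i), 0),
				Mask: net.CIDRMask(24, 32),
			}
			fmt.Printf("Set node PodCIDR node=%q podCIDRs=[%s]\n", node, subnet)
		}
	}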
	I0610 12:32:08.790122    8536 logs.go:123] Gathering logs for kindnet [c39d54960e7d] ...
	I0610 12:32:08.790122    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39d54960e7d"
	I0610 12:32:08.834300    8536 command_runner.go:130] ! I0610 12:12:45.866152       1 main.go:227] handling current node
	I0610 12:32:08.835323    8536 command_runner.go:130] ! I0610 12:12:45.866170       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.835323    8536 command_runner.go:130] ! I0610 12:12:45.866178       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.835389    8536 command_runner.go:130] ! I0610 12:12:55.883210       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.835389    8536 command_runner.go:130] ! I0610 12:12:55.883426       1 main.go:227] handling current node
	I0610 12:32:08.835389    8536 command_runner.go:130] ! I0610 12:12:55.883562       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.835389    8536 command_runner.go:130] ! I0610 12:12:55.883686       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.835389    8536 command_runner.go:130] ! I0610 12:13:05.893577       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.835505    8536 command_runner.go:130] ! I0610 12:13:05.893734       1 main.go:227] handling current node
	I0610 12:32:08.835505    8536 command_runner.go:130] ! I0610 12:13:05.893787       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.835564    8536 command_runner.go:130] ! I0610 12:13:05.893797       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.835564    8536 command_runner.go:130] ! I0610 12:13:15.902454       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.835652    8536 command_runner.go:130] ! I0610 12:13:15.902590       1 main.go:227] handling current node
	I0610 12:32:08.835652    8536 command_runner.go:130] ! I0610 12:13:15.902606       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.835717    8536 command_runner.go:130] ! I0610 12:13:15.902614       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.835717    8536 command_runner.go:130] ! I0610 12:13:25.917172       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.835779    8536 command_runner.go:130] ! I0610 12:13:25.917277       1 main.go:227] handling current node
	I0610 12:32:08.835779    8536 command_runner.go:130] ! I0610 12:13:25.917297       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.835822    8536 command_runner.go:130] ! I0610 12:13:25.917305       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.835822    8536 command_runner.go:130] ! I0610 12:13:35.933505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.835822    8536 command_runner.go:130] ! I0610 12:13:35.933609       1 main.go:227] handling current node
	I0610 12:32:08.835822    8536 command_runner.go:130] ! I0610 12:13:35.933623       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.835896    8536 command_runner.go:130] ! I0610 12:13:35.933630       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.835896    8536 command_runner.go:130] ! I0610 12:13:45.943963       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.835975    8536 command_runner.go:130] ! I0610 12:13:45.944071       1 main.go:227] handling current node
	I0610 12:32:08.836022    8536 command_runner.go:130] ! I0610 12:13:45.944089       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.836022    8536 command_runner.go:130] ! I0610 12:13:45.944114       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.836107    8536 command_runner.go:130] ! I0610 12:13:55.953212       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.836107    8536 command_runner.go:130] ! I0610 12:13:55.953354       1 main.go:227] handling current node
	I0610 12:32:08.836107    8536 command_runner.go:130] ! I0610 12:13:55.953371       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.836189    8536 command_runner.go:130] ! I0610 12:13:55.953380       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.836267    8536 command_runner.go:130] ! I0610 12:14:05.959968       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.836267    8536 command_runner.go:130] ! I0610 12:14:05.960014       1 main.go:227] handling current node
	I0610 12:32:08.836267    8536 command_runner.go:130] ! I0610 12:14:05.960029       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.836345    8536 command_runner.go:130] ! I0610 12:14:05.960036       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.836400    8536 command_runner.go:130] ! I0610 12:14:15.970279       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.836400    8536 command_runner.go:130] ! I0610 12:14:15.970375       1 main.go:227] handling current node
	I0610 12:32:08.836477    8536 command_runner.go:130] ! I0610 12:14:15.970391       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.836533    8536 command_runner.go:130] ! I0610 12:14:15.970399       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.836533    8536 command_runner.go:130] ! I0610 12:14:25.977769       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.836533    8536 command_runner.go:130] ! I0610 12:14:25.977865       1 main.go:227] handling current node
	I0610 12:32:08.836610    8536 command_runner.go:130] ! I0610 12:14:25.977880       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.836665    8536 command_runner.go:130] ! I0610 12:14:25.977886       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.836665    8536 command_runner.go:130] ! I0610 12:14:35.984527       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.836714    8536 command_runner.go:130] ! I0610 12:14:35.984582       1 main.go:227] handling current node
	I0610 12:32:08.836758    8536 command_runner.go:130] ! I0610 12:14:35.984596       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.836758    8536 command_runner.go:130] ! I0610 12:14:35.984604       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.836758    8536 command_runner.go:130] ! I0610 12:14:46.000499       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.836758    8536 command_runner.go:130] ! I0610 12:14:46.000612       1 main.go:227] handling current node
	I0610 12:32:08.836876    8536 command_runner.go:130] ! I0610 12:14:46.000635       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.836938    8536 command_runner.go:130] ! I0610 12:14:46.000650       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.836938    8536 command_runner.go:130] ! I0610 12:14:56.007468       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.836993    8536 command_runner.go:130] ! I0610 12:14:56.007626       1 main.go:227] handling current node
	I0610 12:32:08.836993    8536 command_runner.go:130] ! I0610 12:14:56.007642       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.837050    8536 command_runner.go:130] ! I0610 12:14:56.007651       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.837050    8536 command_runner.go:130] ! I0610 12:15:06.022181       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.837104    8536 command_runner.go:130] ! I0610 12:15:06.022286       1 main.go:227] handling current node
	I0610 12:32:08.837104    8536 command_runner.go:130] ! I0610 12:15:06.022302       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.837104    8536 command_runner.go:130] ! I0610 12:15:06.022312       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.837162    8536 command_runner.go:130] ! I0610 12:15:16.038901       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.837162    8536 command_runner.go:130] ! I0610 12:15:16.038992       1 main.go:227] handling current node
	I0610 12:32:08.837213    8536 command_runner.go:130] ! I0610 12:15:16.039008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.837213    8536 command_runner.go:130] ! I0610 12:15:16.039016       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.837517    8536 command_runner.go:130] ! I0610 12:15:26.062184       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.837579    8536 command_runner.go:130] ! I0610 12:15:26.062279       1 main.go:227] handling current node
	I0610 12:32:08.837672    8536 command_runner.go:130] ! I0610 12:15:26.062296       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.837743    8536 command_runner.go:130] ! I0610 12:15:26.062304       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.837850    8536 command_runner.go:130] ! I0610 12:15:36.071408       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.837850    8536 command_runner.go:130] ! I0610 12:15:36.071540       1 main.go:227] handling current node
	I0610 12:32:08.838075    8536 command_runner.go:130] ! I0610 12:15:36.071556       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.838134    8536 command_runner.go:130] ! I0610 12:15:36.071564       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.838134    8536 command_runner.go:130] ! I0610 12:15:46.078051       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.838221    8536 command_runner.go:130] ! I0610 12:15:46.078158       1 main.go:227] handling current node
	I0610 12:32:08.838221    8536 command_runner.go:130] ! I0610 12:15:46.078176       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.838280    8536 command_runner.go:130] ! I0610 12:15:46.078184       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.838280    8536 command_runner.go:130] ! I0610 12:15:56.086545       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.838341    8536 command_runner.go:130] ! I0610 12:15:56.086647       1 main.go:227] handling current node
	I0610 12:32:08.838341    8536 command_runner.go:130] ! I0610 12:15:56.086663       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.838341    8536 command_runner.go:130] ! I0610 12:15:56.086671       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.838407    8536 command_runner.go:130] ! I0610 12:16:06.094871       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.838407    8536 command_runner.go:130] ! I0610 12:16:06.094920       1 main.go:227] handling current node
	I0610 12:32:08.838407    8536 command_runner.go:130] ! I0610 12:16:06.094935       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.838498    8536 command_runner.go:130] ! I0610 12:16:06.094958       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.838498    8536 command_runner.go:130] ! I0610 12:16:16.109713       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.838498    8536 command_runner.go:130] ! I0610 12:16:16.110282       1 main.go:227] handling current node
	I0610 12:32:08.838561    8536 command_runner.go:130] ! I0610 12:16:16.110679       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.838561    8536 command_runner.go:130] ! I0610 12:16:16.110879       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.838561    8536 command_runner.go:130] ! I0610 12:16:26.124392       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.838619    8536 command_runner.go:130] ! I0610 12:16:26.124492       1 main.go:227] handling current node
	I0610 12:32:08.838619    8536 command_runner.go:130] ! I0610 12:16:26.124507       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.838619    8536 command_runner.go:130] ! I0610 12:16:26.124514       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.838619    8536 command_runner.go:130] ! I0610 12:16:36.130696       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840028    8536 command_runner.go:130] ! I0610 12:16:36.130864       1 main.go:227] handling current node
	I0610 12:32:08.840062    8536 command_runner.go:130] ! I0610 12:16:36.130880       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840062    8536 command_runner.go:130] ! I0610 12:16:36.130888       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:16:46.145505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:16:46.145897       1 main.go:227] handling current node
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:16:46.146067       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:16:46.146083       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:16:56.160466       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:16:56.160571       1 main.go:227] handling current node
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:16:56.160586       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:16:56.160594       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:06.173930       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:06.173977       1 main.go:227] handling current node
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:06.173992       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:06.173999       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:16.180797       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:16.180971       1 main.go:227] handling current node
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:16.181005       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:16.181031       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:26.197081       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:26.197184       1 main.go:227] handling current node
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:26.197201       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:26.197210       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:36.204586       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:36.204700       1 main.go:227] handling current node
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:36.204716       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:36.204725       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:46.214904       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:46.215024       1 main.go:227] handling current node
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:46.215040       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:46.215048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:56.228072       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:56.228173       1 main.go:227] handling current node
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:56.228189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840130    8536 command_runner.go:130] ! I0610 12:17:56.228197       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:06.237192       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:06.237303       1 main.go:227] handling current node
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:06.237329       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:06.237354       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:16.244574       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:16.244799       1 main.go:227] handling current node
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:16.244837       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:16.244863       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:26.258608       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:26.258654       1 main.go:227] handling current node
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:26.258669       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:26.258676       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840747    8536 command_runner.go:130] ! I0610 12:18:36.264620       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840958    8536 command_runner.go:130] ! I0610 12:18:36.264824       1 main.go:227] handling current node
	I0610 12:32:08.840958    8536 command_runner.go:130] ! I0610 12:18:36.264841       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.840958    8536 command_runner.go:130] ! I0610 12:18:36.264850       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.840958    8536 command_runner.go:130] ! I0610 12:18:46.275317       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.840958    8536 command_runner.go:130] ! I0610 12:18:46.275426       1 main.go:227] handling current node
	I0610 12:32:08.840958    8536 command_runner.go:130] ! I0610 12:18:46.275460       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841046    8536 command_runner.go:130] ! I0610 12:18:46.275469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841046    8536 command_runner.go:130] ! I0610 12:18:56.290965       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841046    8536 command_runner.go:130] ! I0610 12:18:56.291027       1 main.go:227] handling current node
	I0610 12:32:08.841046    8536 command_runner.go:130] ! I0610 12:18:56.291041       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841046    8536 command_runner.go:130] ! I0610 12:18:56.291048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841130    8536 command_runner.go:130] ! I0610 12:19:06.298370       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841130    8536 command_runner.go:130] ! I0610 12:19:06.298512       1 main.go:227] handling current node
	I0610 12:32:08.841130    8536 command_runner.go:130] ! I0610 12:19:06.298529       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841130    8536 command_runner.go:130] ! I0610 12:19:06.298537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841130    8536 command_runner.go:130] ! I0610 12:19:16.309110       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841130    8536 command_runner.go:130] ! I0610 12:19:16.309215       1 main.go:227] handling current node
	I0610 12:32:08.841130    8536 command_runner.go:130] ! I0610 12:19:16.309232       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841130    8536 command_runner.go:130] ! I0610 12:19:16.309240       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841212    8536 command_runner.go:130] ! I0610 12:19:26.322583       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841212    8536 command_runner.go:130] ! I0610 12:19:26.322633       1 main.go:227] handling current node
	I0610 12:32:08.841212    8536 command_runner.go:130] ! I0610 12:19:26.322647       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841212    8536 command_runner.go:130] ! I0610 12:19:26.322654       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841212    8536 command_runner.go:130] ! I0610 12:19:36.336250       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841212    8536 command_runner.go:130] ! I0610 12:19:36.336376       1 main.go:227] handling current node
	I0610 12:32:08.841212    8536 command_runner.go:130] ! I0610 12:19:36.336392       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841212    8536 command_runner.go:130] ! I0610 12:19:36.336400       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841310    8536 command_runner.go:130] ! I0610 12:19:46.350996       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841310    8536 command_runner.go:130] ! I0610 12:19:46.351137       1 main.go:227] handling current node
	I0610 12:32:08.841310    8536 command_runner.go:130] ! I0610 12:19:46.351155       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841310    8536 command_runner.go:130] ! I0610 12:19:46.351164       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841310    8536 command_runner.go:130] ! I0610 12:19:56.356996       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841310    8536 command_runner.go:130] ! I0610 12:19:56.357039       1 main.go:227] handling current node
	I0610 12:32:08.841411    8536 command_runner.go:130] ! I0610 12:19:56.357052       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841411    8536 command_runner.go:130] ! I0610 12:19:56.357059       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841411    8536 command_runner.go:130] ! I0610 12:20:06.372114       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841411    8536 command_runner.go:130] ! I0610 12:20:06.372883       1 main.go:227] handling current node
	I0610 12:32:08.841411    8536 command_runner.go:130] ! I0610 12:20:06.373032       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841494    8536 command_runner.go:130] ! I0610 12:20:06.373062       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841577    8536 command_runner.go:130] ! I0610 12:20:16.381023       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841577    8536 command_runner.go:130] ! I0610 12:20:16.381690       1 main.go:227] handling current node
	I0610 12:32:08.841577    8536 command_runner.go:130] ! I0610 12:20:16.381940       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841577    8536 command_runner.go:130] ! I0610 12:20:16.381975       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841577    8536 command_runner.go:130] ! I0610 12:20:26.389178       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841577    8536 command_runner.go:130] ! I0610 12:20:26.389224       1 main.go:227] handling current node
	I0610 12:32:08.841577    8536 command_runner.go:130] ! I0610 12:20:26.389240       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841671    8536 command_runner.go:130] ! I0610 12:20:26.389247       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841671    8536 command_runner.go:130] ! I0610 12:20:36.395687       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841671    8536 command_runner.go:130] ! I0610 12:20:36.395828       1 main.go:227] handling current node
	I0610 12:32:08.841671    8536 command_runner.go:130] ! I0610 12:20:36.395844       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841751    8536 command_runner.go:130] ! I0610 12:20:36.395851       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841751    8536 command_runner.go:130] ! I0610 12:20:46.410656       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:20:46.410865       1 main.go:227] handling current node
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:20:46.410882       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:20:46.410891       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:20:56.425296       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:20:56.425540       1 main.go:227] handling current node
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:20:56.425625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:20:56.425639       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:21:06.439346       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:21:06.439393       1 main.go:227] handling current node
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:21:06.439406       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841815    8536 command_runner.go:130] ! I0610 12:21:06.439413       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841959    8536 command_runner.go:130] ! I0610 12:21:16.450424       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:16.450594       1 main.go:227] handling current node
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:16.450628       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:16.450821       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:26.458379       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:26.458487       1 main.go:227] handling current node
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:26.458503       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:26.458511       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:36.474243       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:36.474337       1 main.go:227] handling current node
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:36.474354       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:36.474362       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.841979    8536 command_runner.go:130] ! I0610 12:21:46.486635       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842210    8536 command_runner.go:130] ! I0610 12:21:46.486679       1 main.go:227] handling current node
	I0610 12:32:08.842210    8536 command_runner.go:130] ! I0610 12:21:46.486693       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842210    8536 command_runner.go:130] ! I0610 12:21:46.486700       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842210    8536 command_runner.go:130] ! I0610 12:21:56.502256       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842210    8536 command_runner.go:130] ! I0610 12:21:56.502361       1 main.go:227] handling current node
	I0610 12:32:08.842210    8536 command_runner.go:130] ! I0610 12:21:56.502377       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842210    8536 command_runner.go:130] ! I0610 12:21:56.502386       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842210    8536 command_runner.go:130] ! I0610 12:22:06.508796       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842210    8536 command_runner.go:130] ! I0610 12:22:06.508911       1 main.go:227] handling current node
	I0610 12:32:08.842210    8536 command_runner.go:130] ! I0610 12:22:06.508928       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842359    8536 command_runner.go:130] ! I0610 12:22:06.508957       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842359    8536 command_runner.go:130] ! I0610 12:22:16.523863       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842359    8536 command_runner.go:130] ! I0610 12:22:16.523952       1 main.go:227] handling current node
	I0610 12:32:08.842359    8536 command_runner.go:130] ! I0610 12:22:16.523970       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842428    8536 command_runner.go:130] ! I0610 12:22:16.523979       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842428    8536 command_runner.go:130] ! I0610 12:22:26.531516       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842428    8536 command_runner.go:130] ! I0610 12:22:26.531621       1 main.go:227] handling current node
	I0610 12:32:08.842491    8536 command_runner.go:130] ! I0610 12:22:26.531637       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842491    8536 command_runner.go:130] ! I0610 12:22:26.531645       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842491    8536 command_runner.go:130] ! I0610 12:22:36.546403       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842491    8536 command_runner.go:130] ! I0610 12:22:36.546510       1 main.go:227] handling current node
	I0610 12:32:08.842554    8536 command_runner.go:130] ! I0610 12:22:36.546525       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842554    8536 command_runner.go:130] ! I0610 12:22:36.546533       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842554    8536 command_runner.go:130] ! I0610 12:22:46.603429       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842554    8536 command_runner.go:130] ! I0610 12:22:46.603565       1 main.go:227] handling current node
	I0610 12:32:08.842554    8536 command_runner.go:130] ! I0610 12:22:46.603581       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842638    8536 command_runner.go:130] ! I0610 12:22:46.603590       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842660    8536 command_runner.go:130] ! I0610 12:22:56.619134       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842660    8536 command_runner.go:130] ! I0610 12:22:56.619253       1 main.go:227] handling current node
	I0610 12:32:08.842660    8536 command_runner.go:130] ! I0610 12:22:56.619287       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842660    8536 command_runner.go:130] ! I0610 12:22:56.619296       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842734    8536 command_runner.go:130] ! I0610 12:23:06.634307       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842734    8536 command_runner.go:130] ! I0610 12:23:06.634399       1 main.go:227] handling current node
	I0610 12:32:08.842734    8536 command_runner.go:130] ! I0610 12:23:06.634415       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842734    8536 command_runner.go:130] ! I0610 12:23:06.634424       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842796    8536 command_runner.go:130] ! I0610 12:23:16.649341       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842796    8536 command_runner.go:130] ! I0610 12:23:16.649508       1 main.go:227] handling current node
	I0610 12:32:08.842796    8536 command_runner.go:130] ! I0610 12:23:16.649527       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842796    8536 command_runner.go:130] ! I0610 12:23:16.649539       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842864    8536 command_runner.go:130] ! I0610 12:23:26.662421       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842864    8536 command_runner.go:130] ! I0610 12:23:26.662451       1 main.go:227] handling current node
	I0610 12:32:08.842864    8536 command_runner.go:130] ! I0610 12:23:26.662462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842925    8536 command_runner.go:130] ! I0610 12:23:26.662468       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842925    8536 command_runner.go:130] ! I0610 12:23:36.669686       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842925    8536 command_runner.go:130] ! I0610 12:23:36.669734       1 main.go:227] handling current node
	I0610 12:32:08.842925    8536 command_runner.go:130] ! I0610 12:23:36.669822       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.842988    8536 command_runner.go:130] ! I0610 12:23:36.669831       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.842988    8536 command_runner.go:130] ! I0610 12:23:46.678078       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.842988    8536 command_runner.go:130] ! I0610 12:23:46.678194       1 main.go:227] handling current node
	I0610 12:32:08.843051    8536 command_runner.go:130] ! I0610 12:23:46.678209       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843051    8536 command_runner.go:130] ! I0610 12:23:46.678217       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843051    8536 command_runner.go:130] ! I0610 12:23:56.685841       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843051    8536 command_runner.go:130] ! I0610 12:23:56.685884       1 main.go:227] handling current node
	I0610 12:32:08.843114    8536 command_runner.go:130] ! I0610 12:23:56.685898       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843150    8536 command_runner.go:130] ! I0610 12:23:56.685905       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843168    8536 command_runner.go:130] ! I0610 12:24:06.692341       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843168    8536 command_runner.go:130] ! I0610 12:24:06.692609       1 main.go:227] handling current node
	I0610 12:32:08.843194    8536 command_runner.go:130] ! I0610 12:24:06.692699       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843194    8536 command_runner.go:130] ! I0610 12:24:06.692856       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843254    8536 command_runner.go:130] ! I0610 12:24:16.700494       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843285    8536 command_runner.go:130] ! I0610 12:24:16.700609       1 main.go:227] handling current node
	I0610 12:32:08.843285    8536 command_runner.go:130] ! I0610 12:24:16.700625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843285    8536 command_runner.go:130] ! I0610 12:24:16.700633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843285    8536 command_runner.go:130] ! I0610 12:24:26.716495       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843285    8536 command_runner.go:130] ! I0610 12:24:26.716609       1 main.go:227] handling current node
	I0610 12:32:08.843351    8536 command_runner.go:130] ! I0610 12:24:26.716625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843351    8536 command_runner.go:130] ! I0610 12:24:26.716633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843351    8536 command_runner.go:130] ! I0610 12:24:36.723606       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843351    8536 command_runner.go:130] ! I0610 12:24:36.723716       1 main.go:227] handling current node
	I0610 12:32:08.843351    8536 command_runner.go:130] ! I0610 12:24:36.723733       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843420    8536 command_runner.go:130] ! I0610 12:24:36.724254       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843420    8536 command_runner.go:130] ! I0610 12:24:46.739916       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843420    8536 command_runner.go:130] ! I0610 12:24:46.740008       1 main.go:227] handling current node
	I0610 12:32:08.843487    8536 command_runner.go:130] ! I0610 12:24:46.740402       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843487    8536 command_runner.go:130] ! I0610 12:24:46.740432       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:24:56.759676       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:24:56.760848       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:24:56.760902       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:24:56.760914       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:06.771450       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:06.771514       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:06.771530       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:06.771537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:16.778338       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:16.778445       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:16.778461       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:16.778469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:26.791778       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:26.791933       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:26.791950       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:26.791974       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:36.800633       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:36.800842       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:36.800860       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:36.800869       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:46.815290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:46.815339       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:46.815355       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:46.815363       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.830374       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.830439       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.830471       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.830478       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.831222       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.831411       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.831494       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.840295       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.840446       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.840464       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.840913       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.845129       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.845329       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.860365       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.860476       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.860493       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.860502       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.861223       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.861379       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:26.873719       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:26.873964       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:26.874016       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:26.874181       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:26.874413       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:26.874451       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881254       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881366       1 main.go:227] handling current node
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881382       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881407       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881814       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881908       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:46.900700       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:46.900797       1 main.go:227] handling current node
	I0610 12:32:08.844245    8536 command_runner.go:130] ! I0610 12:26:46.900815       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844245    8536 command_runner.go:130] ! I0610 12:26:46.900823       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844245    8536 command_runner.go:130] ! I0610 12:26:46.900956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:46.900985       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907395       1 main.go:227] handling current node
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907412       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907420       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907548       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907656       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922305       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922349       1 main.go:227] handling current node
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922361       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922367       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922490       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922515       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.929579       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.929687       1 main.go:227] handling current node
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.929704       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.929712       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.930550       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.930641       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.944603       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.944719       1 main.go:227] handling current node
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.944772       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.945138       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.945535       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.945625       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:36.955188       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:36.955329       1 main.go:227] handling current node
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:36.955462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844977    8536 command_runner.go:130] ! I0610 12:27:36.955581       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844977    8536 command_runner.go:130] ! I0610 12:27:36.955956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.845205    8536 command_runner.go:130] ! I0610 12:27:36.956158       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.845371    8536 command_runner.go:130] ! I0610 12:27:46.965590       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:46.965717       1 main.go:227] handling current node
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:46.965826       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:46.965836       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:46.966598       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:46.966708       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:56.999276       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:56.999553       1 main.go:227] handling current node
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:56.999711       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:56.999728       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:57.000088       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:57.000177       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015069       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015281       1 main.go:227] handling current node
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015300       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015308       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015707       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015928       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.865123    8536 logs.go:123] Gathering logs for describe nodes ...
	I0610 12:32:08.866108    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 12:32:09.099969    8536 command_runner.go:130] > Name:               multinode-813300
	I0610 12:32:09.099969    8536 command_runner.go:130] > Roles:              control-plane
	I0610 12:32:09.100040    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:09.100040    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:09.100040    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:09.100040    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300
	I0610 12:32:09.100081    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:09.100081    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:09.100118    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:09.100118    8536 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0610 12:32:09.100118    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700
	I0610 12:32:09.100118    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:09.100118    8536 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0610 12:32:09.100184    8536 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0610 12:32:09.100184    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:09.100184    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:09.100184    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:09.100233    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:07:57 +0000
	I0610 12:32:09.100277    8536 command_runner.go:130] > Taints:             <none>
	I0610 12:32:09.100277    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:09.100277    8536 command_runner.go:130] > Lease:
	I0610 12:32:09.100277    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300
	I0610 12:32:09.100277    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:09.100323    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:32:00 +0000
	I0610 12:32:09.100323    8536 command_runner.go:130] > Conditions:
	I0610 12:32:09.100361    8536 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0610 12:32:09.100402    8536 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0610 12:32:09.100402    8536 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0610 12:32:09.100402    8536 command_runner.go:130] >   DiskPressure     False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0610 12:32:09.100402    8536 command_runner.go:130] >   PIDPressure      False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0610 12:32:09.100402    8536 command_runner.go:130] >   Ready            True    Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:31:40 +0000   KubeletReady                 kubelet is posting ready status
	I0610 12:32:09.100402    8536 command_runner.go:130] > Addresses:
	I0610 12:32:09.100402    8536 command_runner.go:130] >   InternalIP:  172.17.150.144
	I0610 12:32:09.100402    8536 command_runner.go:130] >   Hostname:    multinode-813300
	I0610 12:32:09.100402    8536 command_runner.go:130] > Capacity:
	I0610 12:32:09.100402    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.100402    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.100550    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.100550    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.100550    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.100550    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:09.100550    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.100550    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.100550    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.100550    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.100550    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.100550    8536 command_runner.go:130] > System Info:
	I0610 12:32:09.100550    8536 command_runner.go:130] >   Machine ID:                 8363a852b0fa420a8dccb009e6f4f9c7
	I0610 12:32:09.100550    8536 command_runner.go:130] >   System UUID:                5734c1ff-f59b-f647-9c36-fb6d9a8cd541
	I0610 12:32:09.100678    8536 command_runner.go:130] >   Boot ID:                    a60b688f-6b78-4fa5-b21e-96a64e5c1047
	I0610 12:32:09.100678    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:09.100719    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:09.100719    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:09.100719    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:09.100761    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:09.100761    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:09.100796    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:09.100810    8536 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0610 12:32:09.100835    8536 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0610 12:32:09.100864    8536 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0610 12:32:09.100864    8536 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:09.100915    8536 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0610 12:32:09.100957    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-z28tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-kbhvv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 etcd-multinode-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         69s
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 kindnet-29gbv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 kube-proxy-nrpvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0610 12:32:09.100957    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:09.100957    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Resource           Requests     Limits
	I0610 12:32:09.100957    8536 command_runner.go:130] >   --------           --------     ------
	I0610 12:32:09.100957    8536 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0610 12:32:09.100957    8536 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0610 12:32:09.100957    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0610 12:32:09.100957    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0610 12:32:09.100957    8536 command_runner.go:130] > Events:
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:09.100957    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-813300 status is now: NodeReady
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:09.101544    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:09.101544    8536 command_runner.go:130] >   Normal  RegisteredNode           57s                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:09.101591    8536 command_runner.go:130] > Name:               multinode-813300-m02
	I0610 12:32:09.101591    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:09.101591    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m02
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:09.101696    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:09.101739    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700
	I0610 12:32:09.101739    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:09.101739    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:09.101739    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:09.101792    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:09.101792    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:11:28 +0000
	I0610 12:32:09.101826    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:09.101826    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:09.101857    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:09.101857    8536 command_runner.go:130] > Lease:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m02
	I0610 12:32:09.101857    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:09.101857    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:30 +0000
	I0610 12:32:09.101857    8536 command_runner.go:130] > Conditions:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:09.101857    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:09.101857    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.101857    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.101857    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.101857    8536 command_runner.go:130] > Addresses:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   InternalIP:  172.17.151.128
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Hostname:    multinode-813300-m02
	I0610 12:32:09.101857    8536 command_runner.go:130] > Capacity:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.101857    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.101857    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.101857    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.101857    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.101857    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.101857    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.101857    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.101857    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.101857    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.101857    8536 command_runner.go:130] > System Info:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Machine ID:                 0d46b791e8a04ff7a071c88405a5a4eb
	I0610 12:32:09.101857    8536 command_runner.go:130] >   System UUID:                e053fc34-e8e5-6649-afc7-f62c0d458753
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Boot ID:                    a3528c50-da8b-4321-8198-65ea5eca732a
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:09.101857    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:09.101857    8536 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0610 12:32:09.101857    8536 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0610 12:32:09.101857    8536 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:09.101857    8536 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0610 12:32:09.101857    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-czxmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:09.101857    8536 command_runner.go:130] >   kube-system                 kindnet-r4nfq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0610 12:32:09.101857    8536 command_runner.go:130] >   kube-system                 kube-proxy-rx2b2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0610 12:32:09.102436    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:09.102436    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:09.102436    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:09.102436    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:09.102503    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:09.102503    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:09.102503    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:09.102503    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:09.102503    8536 command_runner.go:130] > Events:
	I0610 12:32:09.102503    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:09.102503    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:09.102503    8536 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0610 12:32:09.102628    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientMemory
	I0610 12:32:09.102628    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasNoDiskPressure
	I0610 12:32:09.102689    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientPID
	I0610 12:32:09.102689    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:09.102689    8536 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:09.102745    8536 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-813300-m02 status is now: NodeReady
	I0610 12:32:09.102812    8536 command_runner.go:130] >   Normal  NodeNotReady             3m54s              node-controller  Node multinode-813300-m02 status is now: NodeNotReady
	I0610 12:32:09.102812    8536 command_runner.go:130] >   Normal  RegisteredNode           57s                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:09.102812    8536 command_runner.go:130] > Name:               multinode-813300-m03
	I0610 12:32:09.102812    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:09.102812    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m03
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_25_53_0700
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:09.102812    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:09.102812    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:25:52 +0000
	I0610 12:32:09.102812    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:09.102812    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:09.102812    8536 command_runner.go:130] > Lease:
	I0610 12:32:09.102812    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m03
	I0610 12:32:09.102812    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:09.102812    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:04 +0000
	I0610 12:32:09.102812    8536 command_runner.go:130] > Conditions:
	I0610 12:32:09.102812    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:09.103889    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:09.103923    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.103970    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.104005    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.104005    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.104005    8536 command_runner.go:130] > Addresses:
	I0610 12:32:09.104005    8536 command_runner.go:130] >   InternalIP:  172.17.144.46
	I0610 12:32:09.104048    8536 command_runner.go:130] >   Hostname:    multinode-813300-m03
	I0610 12:32:09.104048    8536 command_runner.go:130] > Capacity:
	I0610 12:32:09.104048    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.104048    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.104083    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.104083    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.104083    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.104083    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:09.104083    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.104083    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.104083    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.104135    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.104135    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.104135    8536 command_runner.go:130] > System Info:
	I0610 12:32:09.104135    8536 command_runner.go:130] >   Machine ID:                 2d60e1f6e3b2454db505a650eae61212
	I0610 12:32:09.104169    8536 command_runner.go:130] >   System UUID:                b38b4a9a-39f6-6f43-9e6d-19433dc62cd9
	I0610 12:32:09.104169    8536 command_runner.go:130] >   Boot ID:                    0a419483-5289-4d17-96c2-fd4487360412
	I0610 12:32:09.104169    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:09.104210    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:09.104210    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:09.104246    8536 command_runner.go:130] > PodCIDR:                      10.244.2.0/24
	I0610 12:32:09.104246    8536 command_runner.go:130] > PodCIDRs:                     10.244.2.0/24
	I0610 12:32:09.104246    8536 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:09.104246    8536 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0610 12:32:09.104246    8536 command_runner.go:130] >   kube-system                 kindnet-2pc4j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	I0610 12:32:09.104246    8536 command_runner.go:130] >   kube-system                 kube-proxy-vw56h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	I0610 12:32:09.104246    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:09.104246    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:09.104246    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:09.104246    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:09.104246    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:09.104246    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:09.104246    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:09.104246    8536 command_runner.go:130] > Events:
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0610 12:32:09.104246    8536 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Normal  Starting                 6m4s                   kube-proxy       
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m17s (x2 over 6m17s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientMemory
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m17s (x2 over 6m17s)  kubelet          Node multinode-813300-m03 status is now: NodeHasNoDiskPressure
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m17s (x2 over 6m17s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientPID
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:09.104799    8536 command_runner.go:130] >   Normal  RegisteredNode           6m15s                  node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
	I0610 12:32:09.104854    8536 command_runner.go:130] >   Normal  NodeReady                5m56s                  kubelet          Node multinode-813300-m03 status is now: NodeReady
	I0610 12:32:09.104854    8536 command_runner.go:130] >   Normal  NodeNotReady             4m25s                  node-controller  Node multinode-813300-m03 status is now: NodeNotReady
	I0610 12:32:09.104922    8536 command_runner.go:130] >   Normal  RegisteredNode           57s                    node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
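	(Note in the describe output above that multinode-813300-m02 and -m03 both report every condition as Unknown with reason NodeStatusUnknown and carry node.kubernetes.io/unreachable taints, i.e. their kubelets stopped posting status, while the control-plane node is Ready. A minimal sketch, assuming client-go, of pulling just the Ready condition that this output spreads across three node blocks; the kubeconfig path matches the one used by the describe command above:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)
	        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        // Print one line per node: name, Ready status, and the reason,
	        // e.g. "multinode-813300-m02  Ready=Unknown  NodeStatusUnknown".
	        for _, n := range nodes.Items {
	            for _, c := range n.Status.Conditions {
	                if c.Type == corev1.NodeReady {
	                    fmt.Printf("%-22s Ready=%-8s %s\n", n.Name, c.Status, c.Reason)
	                }
	            }
	        }
	    })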
	I0610 12:32:09.115578    8536 logs.go:123] Gathering logs for kube-proxy [afad8b05897e] ...
	I0610 12:32:09.115578    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 afad8b05897e"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.787330       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.815813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.159.171"]
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.929231       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.929304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.929325       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.933115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.933534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.933681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.935227       1 config.go:192] "Starting service config controller"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.935260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.935291       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.935297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.937731       1 config.go:319] "Starting node config controller"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.938095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:18.035433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:18.035502       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:09.146444    8536 command_runner.go:130] ! I0610 12:08:18.038590       1 shared_informer.go:320] Caches are synced for node config
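	(The kube-proxy startup above follows the standard client-go informer pattern: each config controller (service, endpoint slice, node) logs "Waiting for caches to sync" and then "Caches are synced" once its informer's initial list completes. A sketch of that start-then-wait pattern, assuming client-go; only the service informer is shown:

	    package main

	    import (
	        "log"

	        "k8s.io/client-go/informers"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/cache"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            log.Fatal(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)

	        factory := informers.NewSharedInformerFactory(cs, 0)
	        svc := factory.Core().V1().Services().Informer()

	        stop := make(chan struct{})
	        defer close(stop)
	        factory.Start(stop) // kicks off the list+watch in the background

	        log.Println("Waiting for caches to sync for service config")
	        if !cache.WaitForCacheSync(stop, svc.HasSynced) {
	            log.Fatal("timed out waiting for caches to sync")
	        }
	        log.Println("Caches are synced for service config")
	    })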
	I0610 12:32:09.148238    8536 logs.go:123] Gathering logs for kube-controller-manager [3bee53d5fef9] ...
	I0610 12:32:09.148238    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bee53d5fef9"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:56.976566       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.260708       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.260892       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.266101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.267393       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.268203       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.268377       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.430160       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.430459       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.456745       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.457409       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.457489       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.457839       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.509226       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.512712       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.512947       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.517463       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.520424       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.528150       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.528371       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.528506       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.528651       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.533407       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.543133       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.548293       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.548310       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.548473       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:09.183300    8536 command_runner.go:130] ! I0610 12:31:01.548492       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:09.183300    8536 command_runner.go:130] ! I0610 12:31:01.548660       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:09.183353    8536 command_runner.go:130] ! I0610 12:31:01.548672       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:09.183353    8536 command_runner.go:130] ! I0610 12:31:01.595194       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:09.183353    8536 command_runner.go:130] ! I0610 12:31:01.595266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:09.183353    8536 command_runner.go:130] ! I0610 12:31:01.595295       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:09.183430    8536 command_runner.go:130] ! I0610 12:31:01.595320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:09.183479    8536 command_runner.go:130] ! I0610 12:31:01.595340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:09.183527    8536 command_runner.go:130] ! I0610 12:31:01.595360       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595402       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595465       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! W0610 12:31:01.595507       1 shared_informer.go:597] resyncPeriod 13h16m37.278540311s is smaller than resyncCheckPeriod 16h53m16.378760609s and the informer has already started. Changing it to 16h53m16.378760609s
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595706       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595754       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595782       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595923       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595956       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597416       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597453       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597489       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597516       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597922       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597937       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.598081       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.614277       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.614469       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.614504       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.618176       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.618586       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.618885       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.623374       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.624235       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.624265       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.629921       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:09.184224    8536 command_runner.go:130] ! I0610 12:31:01.630154       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:09.184224    8536 command_runner.go:130] ! I0610 12:31:01.630164       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:09.184293    8536 command_runner.go:130] ! I0610 12:31:01.634130       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:09.184293    8536 command_runner.go:130] ! I0610 12:31:01.634452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:09.184293    8536 command_runner.go:130] ! I0610 12:31:01.634467       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:09.184293    8536 command_runner.go:130] ! I0610 12:31:01.639133       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:09.184385    8536 command_runner.go:130] ! I0610 12:31:01.639154       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:09.184429    8536 command_runner.go:130] ! I0610 12:31:01.639163       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:09.184429    8536 command_runner.go:130] ! I0610 12:31:01.639622       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:09.184429    8536 command_runner.go:130] ! I0610 12:31:01.639640       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:09.184479    8536 command_runner.go:130] ! I0610 12:31:01.643940       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:09.184479    8536 command_runner.go:130] ! I0610 12:31:01.644017       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:09.184531    8536 command_runner.go:130] ! I0610 12:31:01.644031       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:09.184531    8536 command_runner.go:130] ! I0610 12:31:01.652714       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:09.184531    8536 command_runner.go:130] ! I0610 12:31:01.657163       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:09.184579    8536 command_runner.go:130] ! I0610 12:31:01.657350       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:09.184579    8536 command_runner.go:130] ! E0610 12:31:01.664322       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:09.184640    8536 command_runner.go:130] ! I0610 12:31:01.664388       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:09.184698    8536 command_runner.go:130] ! I0610 12:31:01.694061       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:09.184732    8536 command_runner.go:130] ! I0610 12:31:01.694262       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:09.184732    8536 command_runner.go:130] ! I0610 12:31:01.694273       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:09.184782    8536 command_runner.go:130] ! I0610 12:31:01.722911       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:09.184815    8536 command_runner.go:130] ! I0610 12:31:01.725806       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:09.184815    8536 command_runner.go:130] ! I0610 12:31:01.726026       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:09.184844    8536 command_runner.go:130] ! I0610 12:31:01.734788       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:09.184880    8536 command_runner.go:130] ! I0610 12:31:01.735047       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:09.184921    8536 command_runner.go:130] ! I0610 12:31:01.735083       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:09.184921    8536 command_runner.go:130] ! I0610 12:31:01.759990       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:09.184960    8536 command_runner.go:130] ! I0610 12:31:01.761603       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.761772       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.769963       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.773525       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.773866       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.773998       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.778762       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.778803       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.778833       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.779416       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.779429       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.779447       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.780731       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.782261       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.783730       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.782277       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.782337       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.784928       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.782348       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.813253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.813374       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.813998       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.815397       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.818405       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.818514       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.819007       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.819350       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.821748       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.821802       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.822113       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:09.185631    8536 command_runner.go:130] ! I0610 12:31:11.822204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:09.185631    8536 command_runner.go:130] ! I0610 12:31:11.822232       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:09.185631    8536 command_runner.go:130] ! I0610 12:31:11.826332       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:09.185631    8536 command_runner.go:130] ! I0610 12:31:11.826815       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:09.185631    8536 command_runner.go:130] ! I0610 12:31:11.826831       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:09.185747    8536 command_runner.go:130] ! E0610 12:31:11.830024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.830417       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.835752       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.836296       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.836330       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.839311       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.839512       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.839590       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.842028       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.842220       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.842603       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.842639       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.845940       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.846359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.846982       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.849897       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.850381       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.850613       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.853131       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.853418       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.853675       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.856318       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.856441       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.856643       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.856381       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.902405       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.903166       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.906707       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.907117       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.907152       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.910144       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.910388       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.910498       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.913998       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.914276       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.915779       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.916916       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.917975       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.918292       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.930523       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:09.186543    8536 command_runner.go:130] ! I0610 12:31:11.947621       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:09.186543    8536 command_runner.go:130] ! I0610 12:31:11.948394       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:09.186631    8536 command_runner.go:130] ! I0610 12:31:11.948768       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:09.186631    8536 command_runner.go:130] ! I0610 12:31:11.954911       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:09.186631    8536 command_runner.go:130] ! I0610 12:31:11.957486       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.963420       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.973610       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.979167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.980674       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.984963       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.985188       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.994612       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.003389       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.007898       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.011185       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.013303       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.014815       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.016632       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.016812       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.016947       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.017245       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.017927       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.018270       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.019668       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.019818       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.023667       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.024171       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.025888       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.026414       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.026742       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.026899       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.031613       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.035671       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.038980       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.040498       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.044612       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.044983       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.048630       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.048809       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.050934       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.051748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.77596ms"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.058669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.911µs"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.061957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.647762ms"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.062771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="326.05µs"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.074892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.074973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.075004       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.075594       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.130853       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.140823       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.147492       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.174418       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.201305       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.218626       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:31:12.243193       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:31:12.658052       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:31:12.658432       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:31:12.674720       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:31:42.085794       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:32:06.626500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.481917ms"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:32:06.626834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.891µs"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:32:06.653330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="217.376µs"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:32:06.704393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.856077ms"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:32:06.705453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.995µs"
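Editor's note: the kube-controller-manager log above is dominated by paired "Waiting for caches to sync" / "Caches are synced" lines. That is client-go's standard shared-informer startup pattern: each controller starts its informers, then blocks until its local cache mirrors the API server before running its sync loop. Below is a minimal, illustrative sketch of that pattern; it is not minikube's or kube-controller-manager's actual code, and the kubeconfig path is an assumed example value.

    // Illustrative sketch of the informer cache-sync pattern seen in the
    // controller-manager log above. Not minikube's code.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a reachable cluster; the kubeconfig path is an example.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        podInformer := factory.Core().V1().Pods().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop) // start the shared informers

        // Block until the local cache is warm; this is the point at which
        // the log flips from "Waiting for caches to sync" to
        // "Caches are synced".
        if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
            panic("timed out waiting for caches to sync")
        }
        fmt.Println("caches are synced; controller loop may start")
    }

The same handshake explains the ordering in the log: "Started controller" fires as soon as the controller is registered, while work only begins once its informer caches report synced.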
	I0610 12:32:09.204030    8536 logs.go:123] Gathering logs for container status ...
	I0610 12:32:09.204030    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 12:32:09.281479    8536 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0610 12:32:09.282057    8536 command_runner.go:130] > b9550940a81ca       8c811b4aec35f                                                                                         5 seconds ago        Running             busybox                   1                   c4d124cebb3b3       busybox-fc5497c4f-z28tq
	I0610 12:32:09.282057    8536 command_runner.go:130] > 24f3f7e041f98       cbb01a7bd410d                                                                                         5 seconds ago        Running             coredns                   1                   241c4811748fa       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:09.282105    8536 command_runner.go:130] > e934ffe0f9032       6e38f40d628db                                                                                         22 seconds ago       Running             storage-provisioner       2                   2dd9b423841c9       storage-provisioner
	I0610 12:32:09.282105    8536 command_runner.go:130] > c3c4316beca64       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   0c19b39e15f6a       kindnet-29gbv
	I0610 12:32:09.282173    8536 command_runner.go:130] > cc9dbe4aa4005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   2dd9b423841c9       storage-provisioner
	I0610 12:32:09.282214    8536 command_runner.go:130] > 1de5fa0ef8384       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   06d997d7c306c       kube-proxy-nrpvt
	I0610 12:32:09.282214    8536 command_runner.go:130] > d7941126134f2       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   5c3da3b59b527       kube-apiserver-multinode-813300
	I0610 12:32:09.282214    8536 command_runner.go:130] > 877ee07c14997       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   b13c0058ce265       etcd-multinode-813300
	I0610 12:32:09.282214    8536 command_runner.go:130] > d90e72ef46704       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   8902dac03acbc       kube-scheduler-multinode-813300
	I0610 12:32:09.282214    8536 command_runner.go:130] > 3bee53d5fef91       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   f56cc8af37db0       kube-controller-manager-multinode-813300
	I0610 12:32:09.282214    8536 command_runner.go:130] > 91782a06524c6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   9ffef928b2474       busybox-fc5497c4f-z28tq
	I0610 12:32:09.282214    8536 command_runner.go:130] > f2e39052db195       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   a1ae7aed00678       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:09.282214    8536 command_runner.go:130] > c39d54960e7d7       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   689b8976cc029       kindnet-29gbv
	I0610 12:32:09.282214    8536 command_runner.go:130] > afad8b05897e5       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   62db1c721951a       kube-proxy-nrpvt
	I0610 12:32:09.282214    8536 command_runner.go:130] > bd1a6cd987430       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e3b6aa9a0e1d1       kube-scheduler-multinode-813300
	I0610 12:32:09.282214    8536 command_runner.go:130] > f1409bf44ff14       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f04d7b3d4fcc6       kube-controller-manager-multinode-813300
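Editor's note: the "container status" gathering step above shells out to crictl and falls back to docker ps -a when crictl is unavailable (see the ssh_runner command preceding the table). A hedged Go sketch of that same fallback, run locally rather than over SSH as minikube does:

    // Sketch of the crictl-or-docker fallback used by the container-status
    // gathering step above. Illustrative only; minikube runs this remotely
    // as a single bash -c over SSH.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func listContainers() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            return exec.Command("sudo", path, "ps", "-a").CombinedOutput()
        }
        // crictl not found on PATH: fall back to the docker CLI.
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := listContainers()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(string(out))
    }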
	I0610 12:32:09.286325    8536 logs.go:123] Gathering logs for dmesg ...
	I0610 12:32:09.286325    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 12:32:09.313305    8536 command_runner.go:130] > [Jun10 12:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.132459] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.024371] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.082449] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.022513] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0610 12:32:09.313871    8536 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +5.764981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +1.334692] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +1.227872] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +7.275008] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0610 12:32:09.314038    8536 command_runner.go:130] > [Jun10 12:30] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +0.213819] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [ +29.247267] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +0.109477] kauditd_printk_skb: 73 callbacks suppressed
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +0.638576] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +0.214581] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +0.255487] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +3.027967] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.239865] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.216732] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.314976] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.112938] kauditd_printk_skb: 183 callbacks suppressed
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.871081] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +5.053506] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.123809] kauditd_printk_skb: 34 callbacks suppressed
	I0610 12:32:09.314244    8536 command_runner.go:130] > [Jun10 12:31] kauditd_printk_skb: 62 callbacks suppressed
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +3.513215] hrtimer: interrupt took 368589 ns
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.107277] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +7.541664] kauditd_printk_skb: 70 callbacks suppressed
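Editor's note: the dmesg step above filters the kernel ring buffer to warn-and-worse severities, disables colored output, and keeps only the last 400 lines. Reproducing the same pipeline locally, as an illustrative sketch built around the exact command shown above (not minikube's implementation):

    // Sketch of the dmesg gathering step: warn/err/crit/alert/emerg only,
    // no color, last 400 lines. Mirrors the bash pipeline in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("bash", "-c",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
        if err != nil {
            fmt.Println("dmesg failed:", err)
        }
        fmt.Print(string(out))
    }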
	I0610 12:32:09.316399    8536 logs.go:123] Gathering logs for kindnet [c3c4316beca6] ...
	I0610 12:32:09.316475    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c4316beca6"
	I0610 12:32:09.350491    8536 command_runner.go:130] ! I0610 12:31:02.264969       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 12:32:09.350491    8536 command_runner.go:130] ! I0610 12:31:02.265572       1 main.go:107] hostIP = 172.17.150.144
	I0610 12:32:09.350491    8536 command_runner.go:130] ! podIP = 172.17.150.144
	I0610 12:32:09.350491    8536 command_runner.go:130] ! I0610 12:31:02.265708       1 main.go:116] setting mtu 1500 for CNI 
	I0610 12:32:09.350491    8536 command_runner.go:130] ! I0610 12:31:02.265761       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 12:32:09.351425    8536 command_runner.go:130] ! I0610 12:31:02.265778       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 12:32:09.351425    8536 command_runner.go:130] ! I0610 12:31:32.684223       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0610 12:32:09.351483    8536 command_runner.go:130] ! I0610 12:31:32.703397       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:09.351483    8536 command_runner.go:130] ! I0610 12:31:32.703595       1 main.go:227] handling current node
	I0610 12:32:09.351483    8536 command_runner.go:130] ! I0610 12:31:32.742189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:09.351530    8536 command_runner.go:130] ! I0610 12:31:32.742230       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:09.351564    8536 command_runner.go:130] ! I0610 12:31:32.742783       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.151.128 Flags: [] Table: 0} 
	I0610 12:32:09.351586    8536 command_runner.go:130] ! I0610 12:31:32.743097       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:09.351586    8536 command_runner.go:130] ! I0610 12:31:32.743120       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:09.351641    8536 command_runner.go:130] ! I0610 12:31:32.743193       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750326       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750472       1 main.go:227] handling current node
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750487       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750494       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750648       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750678       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767023       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767174       1 main.go:227] handling current node
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767191       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767199       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767842       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767929       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.782886       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.782992       1 main.go:227] handling current node
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.783008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.783073       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.783951       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.784044       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
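Editor's note: kindnet's "Adding route" lines above print a netlink Route struct, one route per remote node, sending that node's pod CIDR via its node IP. A minimal sketch of the equivalent route programming with github.com/vishvananda/netlink, which kindnet builds on; the CIDR and gateway values are taken from the log, everything else is illustrative, and it requires CAP_NET_ADMIN on Linux:

    // Sketch of the per-node route programming behind kindnet's
    // "Adding route" log lines. Values for -m02 are copied from the log.
    package main

    import (
        "log"
        "net"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Pod CIDR of multinode-813300-m02, reached via its node IP.
        _, dst, err := net.ParseCIDR("10.244.1.0/24")
        if err != nil {
            log.Fatal(err)
        }
        route := netlink.Route{
            Dst: dst,
            Gw:  net.ParseIP("172.17.151.128"),
        }
        // RouteReplace is idempotent: it installs the route or updates it
        // in place, so re-running on each sync loop is safe.
        if err := netlink.RouteReplace(&route); err != nil {
            log.Fatal(err)
        }
        log.Printf("installed route %v via %v", route.Dst, route.Gw)
    }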
	I0610 12:32:09.354834    8536 logs.go:123] Gathering logs for Docker ...
	I0610 12:32:09.354834    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 12:32:09.394636    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:09.394738    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:09.394837    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:09.394837    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:09.394896    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:09.394954    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:09.395023    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:09.395023    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.395083    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:09.395285    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.395348    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:09.395348    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:09.395416    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:09.395491    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:09.395491    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:09.395625    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:09.395625    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:09.395691    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.395747    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0610 12:32:09.395747    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.395809    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:09.395866    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:09.395927    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:09.395984    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:09.395984    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:09.396048    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:09.396048    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:09.396107    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.396172    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0610 12:32:09.396172    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.396247    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0610 12:32:09.396247    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:09.396306    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
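The block above shows cri-docker.service crash-looping before dockerd is up: each start fails with "Cannot connect to the Docker daemon at unix:///var/run/docker.sock", and after the third scheduled restart systemd hits the unit's start-rate limit ("Start request repeated too quickly") and stops retrying. The service only recovers once the Docker engine itself comes up at 12:30:13 below. As a minimal sketch of loosening that limit via a systemd drop-in (the exact limits shipped in the minikube guest image are not shown in this log; the values here are illustrative):

    # /etc/systemd/system/cri-docker.service.d/10-restart.conf (hypothetical drop-in)
    [Unit]
    # Permit more rapid start attempts before systemd gives up
    StartLimitIntervalSec=60
    StartLimitBurst=10

    [Service]
    Restart=on-failure
    RestartSec=3

Applying it would take a systemctl daemon-reload followed by systemctl restart cri-docker.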
	I0610 12:32:09.396355    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:09.396428    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.665734294Z" level=info msg="Starting up"
	I0610 12:32:09.396491    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.666799026Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:09.396547    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.668025832Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I0610 12:32:09.396611    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.707077561Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:09.396668    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745342414Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:09.396668    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745425201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:09.396730    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745528085Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:09.396835    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745580077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.396880    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746319960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.396943    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746463837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397006    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746722696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.397063    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746775088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397127    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746796184Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:09.397187    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746813182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397187    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.747203320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397251    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.748049086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397309    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752393000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.397370    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752519780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397507    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752692453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.397588    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752790737Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:09.397588    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753305956Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:09.397733    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753420338Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:09.397798    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753439135Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:09.397798    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759080243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:09.397866    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759316106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:09.397931    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759347801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:09.397996    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759374497Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:09.398102    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759392594Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:09.398168    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759476281Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:09.398228    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759928509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:09.398314    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760128877Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:09.398356    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760824467Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:09.398405    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760850663Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:09.398491    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760867361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398575    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760883758Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398636    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760898556Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398636    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760914553Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398716    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760935350Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398771    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760951047Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398889    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760966645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398889    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760986442Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398953    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761064230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399009    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761105323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399071    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761128319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399177    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761143417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399224    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761158215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399224    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761173012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399297    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761187310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399358    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761210007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399413    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761455768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399477    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761477764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399535    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761493962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399535    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761507660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399622    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761522057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399697    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761538755Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:09.399753    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761561351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399753    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761583448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399816    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761598445Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:09.399873    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761652437Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:09.399990    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761676833Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:09.400055    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761691230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:09.400122    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761709928Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:09.400242    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761721526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.400287    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761735324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:09.400343    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761752021Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:09.400406    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762164056Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:09.400462    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762290536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:09.400524    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762532698Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:09.400585    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762557794Z" level=info msg="containerd successfully booted in 0.059804s"
	I0610 12:32:09.400639    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.723660372Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:09.400639    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.979070633Z" level=info msg="Loading containers: start."
	I0610 12:32:09.400705    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.430556665Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:09.400765    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.525359393Z" level=info msg="Loading containers: done."
	I0610 12:32:09.400819    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.555368825Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:09.400919    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.556499190Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:09.400977    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614621979Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:09.400977    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614710469Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:09.401043    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 systemd[1]: Started Docker Application Container Engine.
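dockerd's first start completes here: containerd is booted as a managed child process, the overlay2 graph driver is selected, and the default bridge (docker0) is auto-assigned 172.18.0.0/16, with the log itself pointing at --bip for pinning a preferred range. As a minimal sketch (not something minikube did in this run), the equivalent setting in /etc/docker/daemon.json would be:

    {
      "bip": "172.18.0.1/16"
    }

The stop/start cycle that follows at 12:30:44 is consistent with minikube's provisioner restarting the engine after writing its generated configuration.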
	I0610 12:32:09.401043    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.105858304Z" level=info msg="Processing signal 'terminated'"
	I0610 12:32:09.401100    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.107858244Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0610 12:32:09.401163    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 systemd[1]: Stopping Docker Application Container Engine...
	I0610 12:32:09.401218    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109274172Z" level=info msg="Daemon shutdown complete"
	I0610 12:32:09.401280    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109439076Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0610 12:32:09.401353    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109591179Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0610 12:32:09.401414    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: docker.service: Deactivated successfully.
	I0610 12:32:09.401414    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Stopped Docker Application Container Engine.
	I0610 12:32:09.401478    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:09.401530    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.200932485Z" level=info msg="Starting up"
	I0610 12:32:09.401943    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.202989526Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:09.401943    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.204789062Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0610 12:32:09.402075    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.250167169Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:09.402164    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291799101Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:09.402215    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291856902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:09.402268    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291930003Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:09.402268    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291948904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402268    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291983304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291997405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292182308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292287811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292310511Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292322911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292350212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292701119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.295953884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296063086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296411793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296455694Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296587396Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296721299Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296741600Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296941504Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297027105Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297046206Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297078906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297254610Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297334111Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297955024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298031825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:09.402994    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298071126Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:09.402994    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298090126Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:09.402994    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298105527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.402994    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298120527Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.402994    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298155728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298172828Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298189828Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298204229Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298218329Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298230929Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298260030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298281530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298296531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298318131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298333531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298494735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298514735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298529635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298592837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298610037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298624437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298639137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298652438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298669738Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298693539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298708139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298720839Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298773440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298792441Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298805041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298820841Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298832741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298850742Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298862942Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299109447Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299202249Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299272150Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299312051Z" level=info msg="containerd successfully booted in 0.052836s"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.253253712Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.287070988Z" level=info msg="Loading containers: start."
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.612574192Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.704084520Z" level=info msg="Loading containers: done."
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733112200Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733256003Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.788468006Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 systemd[1]: Started Docker Application Container Engine.
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.790252742Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Loaded network plugin cni"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0610 12:32:09.404240    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0610 12:32:09.404240    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0610 12:32:09.404240    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start cri-dockerd grpc backend"
	I0610 12:32:09.404240    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
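At this point cri-dockerd has started cleanly: it connects to /var/run/docker.sock, loads the cni network plugin, sets the cgroupfs cgroup driver, and brings up its gRPC backend for the CRI. One way to confirm the CRI endpoint is answering, assuming cri-dockerd's default socket path of /var/run/cri-dockerd.sock inside the guest:

    minikube -p multinode-813300 ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info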
	I0610 12:32:09.404240    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-kbhvv_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c\""
	I0610 12:32:09.404416    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-z28tq_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d\""
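The two "Failed to read pod IP" messages are expected noise on a restart: the kubelet asks the CNI status hook about sandbox containers created before the reboot, but their network namespaces died with the old engine, so cri-dockerd cannot resolve a pod IP for them and the pods are recreated in fresh sandboxes. Checking that the affected pods came back with addresses (context name assumed to match the profile in this log):

    kubectl --context multinode-813300 -n kube-system get pod coredns-7db6d8ff4d-kbhvv -o wide
    kubectl --context multinode-813300 get pod busybox-fc5497c4f-z28tq -o wide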
	I0610 12:32:09.406780    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013449453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.406909    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013587556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.406959    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013608856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407041    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013775860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407092    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.087769538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407092    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089579074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407150    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089879880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407202    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.090133785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407257    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183156944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407310    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183215145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407366    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183227346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407366    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183318447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407417    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f56cc8af37db0f3fea8de363d927c6924c7ad7e81f4908f6f5c87d6c0db17a61/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.407470    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244245765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407521    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244411968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407591    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244427968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244593672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8902dac03acbce14b7e106bff482e591dd574972082943e9adda30969716a707/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b13c0058ce265f3c4b18ec59cbb42b72803807a8d96330756114b2526fffa2de/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99/resolv.conf as [nameserver 172.17.144.1]"
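Each "Will attempt to re-write config file ... resolv.conf" line is cri-dockerd pointing a new sandbox's DNS at the host-side resolver, here 172.17.144.1 (presumably the Hyper-V host address on this VM's virtual switch). One of the rewritten files can be read back directly, using the container path quoted in the log:

    minikube -p multinode-813300 ssh -- sudo cat /var/lib/docker/containers/f56cc8af37db0f3fea8de363d927c6924c7ad7e81f4908f6f5c87d6c0db17a61/resolv.conf

which should contain the single line "nameserver 172.17.144.1".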
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611175897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611296299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611337700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.612109315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730665784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730725385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730738886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730907689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848373736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848822145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851216993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851612501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900274973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900404876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.408698    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900419576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.408698    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900508378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.408698    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:59Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
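The runtime-config message above is the kubelet pushing the node's pod CIDR (10.244.0.0/24) to cri-dockerd over the CRI UpdateRuntimeConfig call; the earlier occurrence of the same message at 12:30:47 carried an empty PodCidr because the kubelet was not yet up. The value should match what the API server assigned to the node:

    kubectl --context multinode-813300 get node multinode-813300 -o jsonpath='{.spec.podCIDR}'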
	I0610 12:32:09.408838    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830014876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.408998    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830867993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.408998    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831086098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409057    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831510106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409057    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854754571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.409057    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854918174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.409147    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.857723530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409184    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.858668949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409184    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.877394923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.409256    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878360042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.409256    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878507645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409301    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.879086357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409301    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d997d7c306c2a08fab9e0e53bd14a9da495d8b0abdad38c9935489b788eccd/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.409365    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2dd9b423841c9fee92dc2a884fe8f45fb9dd5b8713214ce8804ac8ced10629d1/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.409398    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337790622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.409398    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337963526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.409398    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337992226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409398    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.338102629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409526    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.394005846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.409560    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396505296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.409560    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396667999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409607    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396999105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409640    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c19b39e15f6ae82627ffedaf799ef63dd09554d65260dbfc8856b08a4ce7354/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.409640    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.711733694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.409690    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712144402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.409723    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712256705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409813    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712964519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409813    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.980963328Z" level=info msg="shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:09.409861    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981035932Z" level=warning msg="cleaning up after shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:09.409861    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981047633Z" level=info msg="cleaning up dead shim" namespace=moby
	I0610 12:32:09.409893    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1052]: time="2024-06-10T12:31:31.981399154Z" level=info msg="ignoring event" container=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0610 12:32:09.410009    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.486941957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.410009    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487165464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.410062    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487187665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410096    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.488142597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410096    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345354892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.410174    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345592698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.410174    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345620799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410225    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345913706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410275    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.511059667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.410275    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512286197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.410316    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512437501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410350    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512775109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410391    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241c4811748facbb85003522d513039c3dfc5b38006b7f1cba90a5e411055e97/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.410425    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4d124cebb3b3affe7ace090f1a152544207db26621b5b4098cad87e3db47a4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0610 12:32:09.410466    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955148547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.410466    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955266050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.410499    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955283650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410540    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955812861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410581    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444723816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.410622    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444892597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.410655    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444914895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410705    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.445846695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
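The cri-dockerd lines above show the per-pod DNS setup: some containers inherit the host side's resolver (nameserver 172.17.144.1), while the pod at 12:32:04 gets the cluster DNS service and search path (nameserver 10.96.0.10, search default.svc.cluster.local svc.cluster.local cluster.local, options ndots:5). A minimal Go sketch of the file content cri-dockerd says it "will attempt to re-write", using only values taken from the log itself (podResolvConf is a hypothetical helper, not a cri-dockerd function):

// resolv_conf.go — a sketch of the pod resolv.conf content logged above.
// The cluster DNS IP and search path are copied from the log; this is
// illustrative only, not how cri-dockerd actually assembles the file.
package main

import "fmt"

// podResolvConf renders the resolver config a ClusterFirst pod would see.
func podResolvConf(clusterDNS, namespace string) string {
	return fmt.Sprintf(
		"nameserver %s\nsearch %s.svc.cluster.local svc.cluster.local cluster.local\noptions ndots:5\n",
		clusterDNS, namespace)
}

func main() {
	fmt.Print(podResolvConf("10.96.0.10", "default"))
}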
	I0610 12:32:11.966953    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:32:12.000238    8536 command_runner.go:130] > 1892
	I0610 12:32:12.000238    8536 api_server.go:72] duration metric: took 1m7.4789712s to wait for apiserver process to appear ...
	I0610 12:32:12.000238    8536 api_server.go:88] waiting for apiserver healthz status ...
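The two steps above are minikube's apiserver readiness check: first confirm the process exists with pgrep, then poll the healthz endpoint until it answers. A rough Go sketch of such a polling loop, assuming a plain HTTPS endpoint and a one-second retry; the real check authenticates with client certificates, which is omitted here:

// healthz_poll.go — a minimal sketch (not minikube's actual code) of the
// "wait for apiserver healthz" step logged above. URL, timeout, and retry
// interval are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Shortcut for the sketch: the apiserver serves TLS from a
		// cluster-internal CA, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // healthz reports the literal body "ok"
			}
		}
		time.Sleep(time.Second) // retry until the deadline
	}
	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://172.17.150.144:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}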
	I0610 12:32:12.010491    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 12:32:12.040772    8536 command_runner.go:130] > d7941126134f
	I0610 12:32:12.040772    8536 logs.go:276] 1 containers: [d7941126134f]
	I0610 12:32:12.049441    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 12:32:12.078487    8536 command_runner.go:130] > 877ee07c1499
	I0610 12:32:12.078487    8536 logs.go:276] 1 containers: [877ee07c1499]
	I0610 12:32:12.087877    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 12:32:12.114066    8536 command_runner.go:130] > 24f3f7e041f9
	I0610 12:32:12.114612    8536 command_runner.go:130] > f2e39052db19
	I0610 12:32:12.114680    8536 logs.go:276] 2 containers: [24f3f7e041f9 f2e39052db19]
	I0610 12:32:12.123355    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 12:32:12.156483    8536 command_runner.go:130] > d90e72ef4670
	I0610 12:32:12.156483    8536 command_runner.go:130] > bd1a6cd98743
	I0610 12:32:12.156483    8536 logs.go:276] 2 containers: [d90e72ef4670 bd1a6cd98743]
	I0610 12:32:12.166208    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 12:32:12.192177    8536 command_runner.go:130] > 1de5fa0ef838
	I0610 12:32:12.192177    8536 command_runner.go:130] > afad8b05897e
	I0610 12:32:12.192177    8536 logs.go:276] 2 containers: [1de5fa0ef838 afad8b05897e]
	I0610 12:32:12.202221    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 12:32:12.224741    8536 command_runner.go:130] > 3bee53d5fef9
	I0610 12:32:12.225760    8536 command_runner.go:130] > f1409bf44ff1
	I0610 12:32:12.228048    8536 logs.go:276] 2 containers: [3bee53d5fef9 f1409bf44ff1]
	I0610 12:32:12.237371    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 12:32:12.265667    8536 command_runner.go:130] > c3c4316beca6
	I0610 12:32:12.265667    8536 command_runner.go:130] > c39d54960e7d
	I0610 12:32:12.265667    8536 logs.go:276] 2 containers: [c3c4316beca6 c39d54960e7d]
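Each of the docker ps -a --filter=name=k8s_<component> --format={{.ID}} runs above resolves one control-plane component's container IDs so that docker logs can be gathered from them next. A sketch of the same lookup from Go, assuming a local docker CLI on PATH; this is not minikube's actual logs.go:

// list_k8s_containers.go — a sketch of the container-ID lookup pattern
// shown above. The component names mirror the k8s_<name> prefixes that
// the kubelet gives Docker containers.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers, running or exited,
// whose name matches the kubelet-style "k8s_<component>" prefix.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}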
	I0610 12:32:12.265667    8536 logs.go:123] Gathering logs for kube-scheduler [bd1a6cd98743] ...
	I0610 12:32:12.265667    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1a6cd98743"
	I0610 12:32:12.295683    8536 command_runner.go:130] ! I0610 12:07:55.711360       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:12.296495    8536 command_runner.go:130] ! W0610 12:07:57.417322       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:12.296495    8536 command_runner.go:130] ! W0610 12:07:57.417963       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:12.296630    8536 command_runner.go:130] ! W0610 12:07:57.418046       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:12.296660    8536 command_runner.go:130] ! W0610 12:07:57.418071       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:12.296660    8536 command_runner.go:130] ! I0610 12:07:57.459055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:12.296660    8536 command_runner.go:130] ! I0610 12:07:57.460659       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:12.296731    8536 command_runner.go:130] ! I0610 12:07:57.464904       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:12.296731    8536 command_runner.go:130] ! I0610 12:07:57.464952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:12.296731    8536 command_runner.go:130] ! I0610 12:07:57.466483       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:12.296803    8536 command_runner.go:130] ! I0610 12:07:57.466650       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:12.296803    8536 command_runner.go:130] ! W0610 12:07:57.502453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:12.296875    8536 command_runner.go:130] ! E0610 12:07:57.507264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:12.296875    8536 command_runner.go:130] ! W0610 12:07:57.503672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.296941    8536 command_runner.go:130] ! W0610 12:07:57.506076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:12.296941    8536 command_runner.go:130] ! W0610 12:07:57.506243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297028    8536 command_runner.go:130] ! W0610 12:07:57.506320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:12.297051    8536 command_runner.go:130] ! W0610 12:07:57.506362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297082    8536 command_runner.go:130] ! W0610 12:07:57.506402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:12.297082    8536 command_runner.go:130] ! W0610 12:07:57.506651       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:12.297151    8536 command_runner.go:130] ! W0610 12:07:57.506722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:12.297151    8536 command_runner.go:130] ! W0610 12:07:57.507113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297284    8536 command_runner.go:130] ! W0610 12:07:57.507193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:12.297284    8536 command_runner.go:130] ! E0610 12:07:57.511548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297284    8536 command_runner.go:130] ! E0610 12:07:57.511795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:12.297371    8536 command_runner.go:130] ! E0610 12:07:57.512240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297371    8536 command_runner.go:130] ! E0610 12:07:57.512647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:12.297371    8536 command_runner.go:130] ! E0610 12:07:57.515128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297455    8536 command_runner.go:130] ! E0610 12:07:57.515218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:12.297455    8536 command_runner.go:130] ! E0610 12:07:57.515698       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.516017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.516332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.516529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:57.537276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.537491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:57.537680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.538611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:57.537622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.538734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:57.538013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.539237       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:58.345815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:58.345914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:58.356843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:12.298174    8536 command_runner.go:130] ! E0610 12:07:58.357045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.406587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.406863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.426795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.427119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.503514       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.503568       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.610877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.611650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.611603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.612141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.614694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.614992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.752570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.752635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.810605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.810721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:12.299216    8536 command_runner.go:130] ! W0610 12:07:58.815170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:12.299278    8536 command_runner.go:130] ! E0610 12:07:58.815852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:12.299361    8536 command_runner.go:130] ! W0610 12:07:58.816493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:12.299394    8536 command_runner.go:130] ! E0610 12:07:58.816687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:12.299423    8536 command_runner.go:130] ! W0610 12:07:58.834947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.299486    8536 command_runner.go:130] ! E0610 12:07:58.836145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.299486    8536 command_runner.go:130] ! W0610 12:07:58.838693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:12.299551    8536 command_runner.go:130] ! E0610 12:07:58.838938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:12.299551    8536 command_runner.go:130] ! W0610 12:07:58.897162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:12.299607    8536 command_runner.go:130] ! E0610 12:07:58.897200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:12.299642    8536 command_runner.go:130] ! I0610 12:08:01.565495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:12.299674    8536 command_runner.go:130] ! E0610 12:28:16.298586       1 run.go:74] "command failed" err="finished without leader elect"
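The wall of "forbidden" warnings between 12:07:57 and 12:07:58 above is the usual control-plane bootstrap race: kube-scheduler starts its informers before its RBAC bindings are served, every list/watch is denied, and the errors stop once authorization catches up (caches sync at 12:08:01). The final line is separate: the scheduler exits with "finished without leader elect" when the node is shut down. One way to probe such a denial from Go is a SelfSubjectAccessReview, the API behind kubectl auth can-i; a sketch assuming in-cluster credentials:

// can_i.go — a sketch showing how a "cannot list resource" failure like
// those above can be checked explicitly. Assumes it runs inside a pod
// with a service account; not part of the scheduler itself.
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig() // in-cluster credentials assumed
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Ask the apiserver whether our identity may list pods cluster-wide,
	// the same verb/resource pair the scheduler was denied during bootstrap.
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Resource: "pods",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed, "reason:", resp.Status.Reason)
}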
	I0610 12:32:12.311166    8536 logs.go:123] Gathering logs for kube-proxy [1de5fa0ef838] ...
	I0610 12:32:12.311166    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1de5fa0ef838"
	I0610 12:32:12.341154    8536 command_runner.go:130] ! I0610 12:31:02.254962       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:12.341154    8536 command_runner.go:130] ! I0610 12:31:02.294630       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.150.144"]
	I0610 12:32:12.341154    8536 command_runner.go:130] ! I0610 12:31:02.403290       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:12.341154    8536 command_runner.go:130] ! I0610 12:31:02.403338       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:12.341154    8536 command_runner.go:130] ! I0610 12:31:02.403357       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:12.342150    8536 command_runner.go:130] ! I0610 12:31:02.416009       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:12.342172    8536 command_runner.go:130] ! I0610 12:31:02.416300       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:12.342172    8536 command_runner.go:130] ! I0610 12:31:02.416345       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:12.342172    8536 command_runner.go:130] ! I0610 12:31:02.424657       1 config.go:192] "Starting service config controller"
	I0610 12:32:12.342172    8536 command_runner.go:130] ! I0610 12:31:02.425325       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:12.342233    8536 command_runner.go:130] ! I0610 12:31:02.425369       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:12.342259    8536 command_runner.go:130] ! I0610 12:31:02.425382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:12.342259    8536 command_runner.go:130] ! I0610 12:31:02.432037       1 config.go:319] "Starting node config controller"
	I0610 12:32:12.342259    8536 command_runner.go:130] ! I0610 12:31:02.432075       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:12.342259    8536 command_runner.go:130] ! I0610 12:31:02.535663       1 shared_informer.go:320] Caches are synced for node config
	I0610 12:32:12.342317    8536 command_runner.go:130] ! I0610 12:31:02.535744       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:12.342317    8536 command_runner.go:130] ! I0610 12:31:02.535786       1 shared_informer.go:320] Caches are synced for endpoint slice config
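The kube-proxy startup above follows the standard client-go informer sequence: start the service and endpoint-slice config controllers, log "Waiting for caches to sync", and proceed only once every informer reports synced. A sketch of that pattern, assuming a kubeconfig at the default path; this is the general client-go idiom, not kube-proxy's code:

// cache_sync.go — a sketch of the shared-informer startup that produces
// the "Waiting for caches to sync" / "Caches are synced" pairs above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()            // "service config"
	epsInformer := factory.Discovery().V1().EndpointSlices().Informer() // "endpoint slice config"

	factory.Start(stop) // begins list+watch for every requested informer
	fmt.Println("Waiting for caches to sync")
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced, epsInformer.HasSynced) {
		panic("caches did not sync")
	}
	fmt.Println("Caches are synced")
}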
	I0610 12:32:12.344172    8536 logs.go:123] Gathering logs for kindnet [c39d54960e7d] ...
	I0610 12:32:12.344172    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39d54960e7d"
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:45.866152       1 main.go:227] handling current node
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:45.866170       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:45.866178       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:55.883210       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:55.883426       1 main.go:227] handling current node
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:55.883562       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:55.883686       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:13:05.893577       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379133    8536 command_runner.go:130] ! I0610 12:13:05.893734       1 main.go:227] handling current node
	I0610 12:32:12.379133    8536 command_runner.go:130] ! I0610 12:13:05.893787       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379133    8536 command_runner.go:130] ! I0610 12:13:05.893797       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379133    8536 command_runner.go:130] ! I0610 12:13:15.902454       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379190    8536 command_runner.go:130] ! I0610 12:13:15.902590       1 main.go:227] handling current node
	I0610 12:32:12.379226    8536 command_runner.go:130] ! I0610 12:13:15.902606       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379226    8536 command_runner.go:130] ! I0610 12:13:15.902614       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379226    8536 command_runner.go:130] ! I0610 12:13:25.917172       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379226    8536 command_runner.go:130] ! I0610 12:13:25.917277       1 main.go:227] handling current node
	I0610 12:32:12.379226    8536 command_runner.go:130] ! I0610 12:13:25.917297       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379299    8536 command_runner.go:130] ! I0610 12:13:25.917305       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379299    8536 command_runner.go:130] ! I0610 12:13:35.933505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379327    8536 command_runner.go:130] ! I0610 12:13:35.933609       1 main.go:227] handling current node
	I0610 12:32:12.379327    8536 command_runner.go:130] ! I0610 12:13:35.933623       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379327    8536 command_runner.go:130] ! I0610 12:13:35.933630       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379327    8536 command_runner.go:130] ! I0610 12:13:45.943963       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379392    8536 command_runner.go:130] ! I0610 12:13:45.944071       1 main.go:227] handling current node
	I0610 12:32:12.379392    8536 command_runner.go:130] ! I0610 12:13:45.944089       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379392    8536 command_runner.go:130] ! I0610 12:13:45.944114       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379392    8536 command_runner.go:130] ! I0610 12:13:55.953212       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379453    8536 command_runner.go:130] ! I0610 12:13:55.953354       1 main.go:227] handling current node
	I0610 12:32:12.379453    8536 command_runner.go:130] ! I0610 12:13:55.953371       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379478    8536 command_runner.go:130] ! I0610 12:13:55.953380       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379478    8536 command_runner.go:130] ! I0610 12:14:05.959968       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:05.960014       1 main.go:227] handling current node
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:05.960029       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:05.960036       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:15.970279       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:15.970375       1 main.go:227] handling current node
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:15.970391       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:15.970399       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:25.977769       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:25.977865       1 main.go:227] handling current node
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:25.977880       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:25.977886       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:35.984527       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:35.984582       1 main.go:227] handling current node
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:35.984596       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:35.984604       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:46.000499       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:46.000612       1 main.go:227] handling current node
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:46.000635       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.380247    8536 command_runner.go:130] ! I0610 12:14:46.000650       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.380247    8536 command_runner.go:130] ! I0610 12:14:56.007468       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.380524    8536 command_runner.go:130] ! I0610 12:14:56.007626       1 main.go:227] handling current node
	I0610 12:32:12.380524    8536 command_runner.go:130] ! I0610 12:14:56.007642       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.380524    8536 command_runner.go:130] ! I0610 12:14:56.007651       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.380524    8536 command_runner.go:130] ! I0610 12:15:06.022181       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.380524    8536 command_runner.go:130] ! I0610 12:15:06.022286       1 main.go:227] handling current node
	I0610 12:32:12.380592    8536 command_runner.go:130] ! I0610 12:15:06.022302       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.380592    8536 command_runner.go:130] ! I0610 12:15:06.022312       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.380592    8536 command_runner.go:130] ! I0610 12:15:16.038901       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.380701    8536 command_runner.go:130] ! I0610 12:15:16.038992       1 main.go:227] handling current node
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:16.039008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:16.039016       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:26.062184       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:26.062279       1 main.go:227] handling current node
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:26.062296       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:26.062304       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.381307    8536 command_runner.go:130] ! I0610 12:15:36.071408       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.381376    8536 command_runner.go:130] ! I0610 12:15:36.071540       1 main.go:227] handling current node
	I0610 12:32:12.381376    8536 command_runner.go:130] ! I0610 12:15:36.071556       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.381376    8536 command_runner.go:130] ! I0610 12:15:36.071564       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.381376    8536 command_runner.go:130] ! I0610 12:15:46.078051       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.381376    8536 command_runner.go:130] ! I0610 12:15:46.078158       1 main.go:227] handling current node
	I0610 12:32:12.381476    8536 command_runner.go:130] ! I0610 12:15:46.078176       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.381476    8536 command_runner.go:130] ! I0610 12:15:46.078184       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.381563    8536 command_runner.go:130] ! I0610 12:15:56.086545       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.381752    8536 command_runner.go:130] ! I0610 12:15:56.086647       1 main.go:227] handling current node
	I0610 12:32:12.381752    8536 command_runner.go:130] ! I0610 12:15:56.086663       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.381752    8536 command_runner.go:130] ! I0610 12:15:56.086671       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.381752    8536 command_runner.go:130] ! I0610 12:16:06.094871       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.381850    8536 command_runner.go:130] ! I0610 12:16:06.094920       1 main.go:227] handling current node
	I0610 12:32:12.381881    8536 command_runner.go:130] ! I0610 12:16:06.094935       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.381881    8536 command_runner.go:130] ! I0610 12:16:06.094958       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.381881    8536 command_runner.go:130] ! I0610 12:16:16.109713       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:16.110282       1 main.go:227] handling current node
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:16.110679       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:16.110879       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:26.124392       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:26.124492       1 main.go:227] handling current node
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:26.124507       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:26.124514       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383431    8536 command_runner.go:130] ! I0610 12:16:36.130696       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383431    8536 command_runner.go:130] ! I0610 12:16:36.130864       1 main.go:227] handling current node
	I0610 12:32:12.383474    8536 command_runner.go:130] ! I0610 12:16:36.130880       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383474    8536 command_runner.go:130] ! I0610 12:16:36.130888       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383512    8536 command_runner.go:130] ! I0610 12:16:46.145505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383537    8536 command_runner.go:130] ! I0610 12:16:46.145897       1 main.go:227] handling current node
	I0610 12:32:12.383568    8536 command_runner.go:130] ! I0610 12:16:46.146067       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383568    8536 command_runner.go:130] ! I0610 12:16:46.146083       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383568    8536 command_runner.go:130] ! I0610 12:16:56.160466       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383629    8536 command_runner.go:130] ! I0610 12:16:56.160571       1 main.go:227] handling current node
	I0610 12:32:12.383629    8536 command_runner.go:130] ! I0610 12:16:56.160586       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383629    8536 command_runner.go:130] ! I0610 12:16:56.160594       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383629    8536 command_runner.go:130] ! I0610 12:17:06.173930       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383629    8536 command_runner.go:130] ! I0610 12:17:06.173977       1 main.go:227] handling current node
	I0610 12:32:12.383695    8536 command_runner.go:130] ! I0610 12:17:06.173992       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383695    8536 command_runner.go:130] ! I0610 12:17:06.173999       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383695    8536 command_runner.go:130] ! I0610 12:17:16.180797       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383695    8536 command_runner.go:130] ! I0610 12:17:16.180971       1 main.go:227] handling current node
	I0610 12:32:12.383756    8536 command_runner.go:130] ! I0610 12:17:16.181005       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383756    8536 command_runner.go:130] ! I0610 12:17:16.181031       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383804    8536 command_runner.go:130] ! I0610 12:17:26.197081       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383826    8536 command_runner.go:130] ! I0610 12:17:26.197184       1 main.go:227] handling current node
	I0610 12:32:12.383826    8536 command_runner.go:130] ! I0610 12:17:26.197201       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383826    8536 command_runner.go:130] ! I0610 12:17:26.197210       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:36.204586       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:36.204700       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:36.204716       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:36.204725       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:46.214904       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:46.215024       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:46.215040       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:46.215048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:56.228072       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:56.228173       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:56.228189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:56.228197       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:06.237192       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:06.237303       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:06.237329       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:06.237354       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:16.244574       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:16.244799       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:16.244837       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:16.244863       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:26.258608       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:26.258654       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:26.258669       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.384400    8536 command_runner.go:130] ! I0610 12:18:26.258676       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.385009    8536 command_runner.go:130] ! I0610 12:18:36.264620       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.385009    8536 command_runner.go:130] ! I0610 12:18:36.264824       1 main.go:227] handling current node
	I0610 12:32:12.385009    8536 command_runner.go:130] ! I0610 12:18:36.264841       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.385009    8536 command_runner.go:130] ! I0610 12:18:36.264850       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:46.275317       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:46.275426       1 main.go:227] handling current node
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:46.275460       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:46.275469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:56.290965       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:56.291027       1 main.go:227] handling current node
	I0610 12:32:12.386060    8536 command_runner.go:130] ! I0610 12:18:56.291041       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386060    8536 command_runner.go:130] ! I0610 12:18:56.291048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:06.298370       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:06.298512       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:06.298529       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:06.298537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:16.309110       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:16.309215       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:16.309232       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:16.309240       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:26.322583       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:26.322633       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:26.322647       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:26.322654       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:36.336250       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:36.336376       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:36.336392       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:36.336400       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:46.350996       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:46.351137       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:46.351155       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:46.351164       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:56.356996       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:56.357039       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:56.357052       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:56.357059       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:06.372114       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:06.372883       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:06.373032       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:06.373062       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:16.381023       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:16.381690       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:16.381940       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:16.381975       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:26.389178       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:26.389224       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:26.389240       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:26.389247       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:36.395687       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:36.395828       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:36.395844       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:36.395851       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:46.410656       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:46.410865       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:46.410882       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:46.410891       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:56.425296       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:56.425540       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:56.425625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:56.425639       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:06.439346       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:06.439393       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:06.439406       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:06.439413       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:16.450424       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:16.450594       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:16.450628       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:16.450821       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:26.458379       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:26.458487       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:26.458503       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:26.458511       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:36.474243       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:36.474337       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:36.474354       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:36.474362       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:46.486635       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:46.486679       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:46.486693       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:46.486700       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:56.502256       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:56.502361       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:56.502377       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:56.502386       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:06.508796       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:06.508911       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:06.508928       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:06.508957       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:16.523863       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:16.523952       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:16.523970       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:16.523979       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:26.531516       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:26.531621       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:26.531637       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:26.531645       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:36.546403       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:36.546510       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:36.546525       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:36.546533       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388211    8536 command_runner.go:130] ! I0610 12:22:46.603429       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:46.603565       1 main.go:227] handling current node
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:46.603581       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:46.603590       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:56.619134       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:56.619253       1 main.go:227] handling current node
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:56.619287       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:56.619296       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:23:06.634307       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:23:06.634399       1 main.go:227] handling current node
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:23:06.634415       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388392    8536 command_runner.go:130] ! I0610 12:23:06.634424       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388392    8536 command_runner.go:130] ! I0610 12:23:16.649341       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388392    8536 command_runner.go:130] ! I0610 12:23:16.649508       1 main.go:227] handling current node
	I0610 12:32:12.388392    8536 command_runner.go:130] ! I0610 12:23:16.649527       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388457    8536 command_runner.go:130] ! I0610 12:23:16.649539       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388457    8536 command_runner.go:130] ! I0610 12:23:26.662421       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388457    8536 command_runner.go:130] ! I0610 12:23:26.662451       1 main.go:227] handling current node
	I0610 12:32:12.388457    8536 command_runner.go:130] ! I0610 12:23:26.662462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388517    8536 command_runner.go:130] ! I0610 12:23:26.662468       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388517    8536 command_runner.go:130] ! I0610 12:23:36.669686       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388517    8536 command_runner.go:130] ! I0610 12:23:36.669734       1 main.go:227] handling current node
	I0610 12:32:12.388517    8536 command_runner.go:130] ! I0610 12:23:36.669822       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388517    8536 command_runner.go:130] ! I0610 12:23:36.669831       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388584    8536 command_runner.go:130] ! I0610 12:23:46.678078       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388584    8536 command_runner.go:130] ! I0610 12:23:46.678194       1 main.go:227] handling current node
	I0610 12:32:12.388584    8536 command_runner.go:130] ! I0610 12:23:46.678209       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388584    8536 command_runner.go:130] ! I0610 12:23:46.678217       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388584    8536 command_runner.go:130] ! I0610 12:23:56.685841       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388659    8536 command_runner.go:130] ! I0610 12:23:56.685884       1 main.go:227] handling current node
	I0610 12:32:12.388659    8536 command_runner.go:130] ! I0610 12:23:56.685898       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388659    8536 command_runner.go:130] ! I0610 12:23:56.685905       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388732    8536 command_runner.go:130] ! I0610 12:24:06.692341       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388732    8536 command_runner.go:130] ! I0610 12:24:06.692609       1 main.go:227] handling current node
	I0610 12:32:12.388732    8536 command_runner.go:130] ! I0610 12:24:06.692699       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388732    8536 command_runner.go:130] ! I0610 12:24:06.692856       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388806    8536 command_runner.go:130] ! I0610 12:24:16.700494       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388806    8536 command_runner.go:130] ! I0610 12:24:16.700609       1 main.go:227] handling current node
	I0610 12:32:12.388806    8536 command_runner.go:130] ! I0610 12:24:16.700625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388806    8536 command_runner.go:130] ! I0610 12:24:16.700633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:26.716495       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:26.716609       1 main.go:227] handling current node
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:26.716625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:26.716633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:36.723606       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:36.723716       1 main.go:227] handling current node
	I0610 12:32:12.388931    8536 command_runner.go:130] ! I0610 12:24:36.723733       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388931    8536 command_runner.go:130] ! I0610 12:24:36.724254       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388931    8536 command_runner.go:130] ! I0610 12:24:46.739916       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388931    8536 command_runner.go:130] ! I0610 12:24:46.740008       1 main.go:227] handling current node
	I0610 12:32:12.388931    8536 command_runner.go:130] ! I0610 12:24:46.740402       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389004    8536 command_runner.go:130] ! I0610 12:24:46.740432       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389004    8536 command_runner.go:130] ! I0610 12:24:56.759676       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389004    8536 command_runner.go:130] ! I0610 12:24:56.760848       1 main.go:227] handling current node
	I0610 12:32:12.389004    8536 command_runner.go:130] ! I0610 12:24:56.760902       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389059    8536 command_runner.go:130] ! I0610 12:24:56.760914       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389059    8536 command_runner.go:130] ! I0610 12:25:06.771450       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389116    8536 command_runner.go:130] ! I0610 12:25:06.771514       1 main.go:227] handling current node
	I0610 12:32:12.389116    8536 command_runner.go:130] ! I0610 12:25:06.771530       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389116    8536 command_runner.go:130] ! I0610 12:25:06.771537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389116    8536 command_runner.go:130] ! I0610 12:25:16.778338       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389171    8536 command_runner.go:130] ! I0610 12:25:16.778445       1 main.go:227] handling current node
	I0610 12:32:12.389171    8536 command_runner.go:130] ! I0610 12:25:16.778461       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389171    8536 command_runner.go:130] ! I0610 12:25:16.778469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389171    8536 command_runner.go:130] ! I0610 12:25:26.791778       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389230    8536 command_runner.go:130] ! I0610 12:25:26.791933       1 main.go:227] handling current node
	I0610 12:32:12.389230    8536 command_runner.go:130] ! I0610 12:25:26.791950       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389230    8536 command_runner.go:130] ! I0610 12:25:26.791974       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389230    8536 command_runner.go:130] ! I0610 12:25:36.800633       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389285    8536 command_runner.go:130] ! I0610 12:25:36.800842       1 main.go:227] handling current node
	I0610 12:32:12.389285    8536 command_runner.go:130] ! I0610 12:25:36.800860       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389285    8536 command_runner.go:130] ! I0610 12:25:36.800869       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389285    8536 command_runner.go:130] ! I0610 12:25:46.815290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389285    8536 command_runner.go:130] ! I0610 12:25:46.815339       1 main.go:227] handling current node
	I0610 12:32:12.389341    8536 command_runner.go:130] ! I0610 12:25:46.815355       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389341    8536 command_runner.go:130] ! I0610 12:25:46.815363       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389341    8536 command_runner.go:130] ! I0610 12:25:56.830374       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389341    8536 command_runner.go:130] ! I0610 12:25:56.830439       1 main.go:227] handling current node
	I0610 12:32:12.389398    8536 command_runner.go:130] ! I0610 12:25:56.830471       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389398    8536 command_runner.go:130] ! I0610 12:25:56.830478       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389398    8536 command_runner.go:130] ! I0610 12:25:56.831222       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389398    8536 command_runner.go:130] ! I0610 12:25:56.831411       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389398    8536 command_runner.go:130] ! I0610 12:25:56.831494       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:12.389499    8536 command_runner.go:130] ! I0610 12:26:06.840295       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389499    8536 command_runner.go:130] ! I0610 12:26:06.840446       1 main.go:227] handling current node
	I0610 12:32:12.389499    8536 command_runner.go:130] ! I0610 12:26:06.840464       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389499    8536 command_runner.go:130] ! I0610 12:26:06.840913       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389579    8536 command_runner.go:130] ! I0610 12:26:06.845129       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389579    8536 command_runner.go:130] ! I0610 12:26:06.845329       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389579    8536 command_runner.go:130] ! I0610 12:26:16.860365       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389579    8536 command_runner.go:130] ! I0610 12:26:16.860476       1 main.go:227] handling current node
	I0610 12:32:12.389579    8536 command_runner.go:130] ! I0610 12:26:16.860493       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389642    8536 command_runner.go:130] ! I0610 12:26:16.860502       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389642    8536 command_runner.go:130] ! I0610 12:26:16.861223       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389642    8536 command_runner.go:130] ! I0610 12:26:16.861379       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389701    8536 command_runner.go:130] ! I0610 12:26:26.873719       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389701    8536 command_runner.go:130] ! I0610 12:26:26.873964       1 main.go:227] handling current node
	I0610 12:32:12.389701    8536 command_runner.go:130] ! I0610 12:26:26.874016       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389701    8536 command_runner.go:130] ! I0610 12:26:26.874181       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389701    8536 command_runner.go:130] ! I0610 12:26:26.874413       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389759    8536 command_runner.go:130] ! I0610 12:26:26.874451       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389759    8536 command_runner.go:130] ! I0610 12:26:36.881254       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389759    8536 command_runner.go:130] ! I0610 12:26:36.881366       1 main.go:227] handling current node
	I0610 12:32:12.389759    8536 command_runner.go:130] ! I0610 12:26:36.881382       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389759    8536 command_runner.go:130] ! I0610 12:26:36.881407       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389818    8536 command_runner.go:130] ! I0610 12:26:36.881814       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389818    8536 command_runner.go:130] ! I0610 12:26:36.881908       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389818    8536 command_runner.go:130] ! I0610 12:26:46.900700       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389876    8536 command_runner.go:130] ! I0610 12:26:46.900797       1 main.go:227] handling current node
	I0610 12:32:12.389876    8536 command_runner.go:130] ! I0610 12:26:46.900815       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389876    8536 command_runner.go:130] ! I0610 12:26:46.900823       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389876    8536 command_runner.go:130] ! I0610 12:26:46.900956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389876    8536 command_runner.go:130] ! I0610 12:26:46.900985       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907395       1 main.go:227] handling current node
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907412       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907420       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907548       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907656       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922305       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922349       1 main.go:227] handling current node
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922361       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922367       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922490       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922515       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390138    8536 command_runner.go:130] ! I0610 12:27:16.929579       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390138    8536 command_runner.go:130] ! I0610 12:27:16.929687       1 main.go:227] handling current node
	I0610 12:32:12.390138    8536 command_runner.go:130] ! I0610 12:27:16.929704       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:16.929712       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:16.930550       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:16.930641       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:26.944603       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:26.944719       1 main.go:227] handling current node
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:26.944772       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390265    8536 command_runner.go:130] ! I0610 12:27:26.945138       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390265    8536 command_runner.go:130] ! I0610 12:27:26.945535       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390265    8536 command_runner.go:130] ! I0610 12:27:26.945625       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390265    8536 command_runner.go:130] ! I0610 12:27:36.955188       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390265    8536 command_runner.go:130] ! I0610 12:27:36.955329       1 main.go:227] handling current node
	I0610 12:32:12.390427    8536 command_runner.go:130] ! I0610 12:27:36.955462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390427    8536 command_runner.go:130] ! I0610 12:27:36.955581       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390427    8536 command_runner.go:130] ! I0610 12:27:36.955956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390427    8536 command_runner.go:130] ! I0610 12:27:36.956158       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390427    8536 command_runner.go:130] ! I0610 12:27:46.965590       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:46.965717       1 main.go:227] handling current node
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:46.965826       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:46.965836       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:46.966598       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:46.966708       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:56.999276       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:56.999553       1 main.go:227] handling current node
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:56.999711       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:56.999728       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:57.000088       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:57.000177       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015069       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015281       1 main.go:227] handling current node
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015300       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015308       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015707       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015928       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
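
The kindnet entries above show its reconcile loop: roughly every ten seconds it walks the node list, handles the current node (172.17.159.171), and re-asserts the pod CIDR route for each peer; when multinode-813300-m03 registered at 12:25:56 it also installed a new route for 10.244.2.0/24 via 172.17.144.46. A quick way to confirm those routes actually landed on the primary node is a sketch like the following (assuming the multinode-813300 profile from this run is still up; the expected lines are read off the log above, not taken from a live run):

    out/minikube-windows-amd64.exe -p multinode-813300 ssh "ip route show | grep 10.244"
    # expected, per the kindnet log above:
    #   10.244.1.0/24 via 172.17.151.128 ...   (multinode-813300-m02)
    #   10.244.2.0/24 via 172.17.144.46 ...    (multinode-813300-m03)
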
	I0610 12:32:12.413148    8536 logs.go:123] Gathering logs for dmesg ...
	I0610 12:32:12.413148    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 12:32:12.439745    8536 command_runner.go:130] > [Jun10 12:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.132459] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.024371] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.082449] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.022513] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0610 12:32:12.440283    8536 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +5.764981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0610 12:32:12.440467    8536 command_runner.go:130] > [  +1.334692] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0610 12:32:12.440467    8536 command_runner.go:130] > [  +1.227872] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0610 12:32:12.440467    8536 command_runner.go:130] > [  +7.275008] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0610 12:32:12.440467    8536 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0610 12:32:12.440467    8536 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0610 12:32:12.440467    8536 command_runner.go:130] > [Jun10 12:30] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	I0610 12:32:12.440551    8536 command_runner.go:130] > [  +0.213819] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	I0610 12:32:12.440551    8536 command_runner.go:130] > [ +29.247267] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.109477] kauditd_printk_skb: 73 callbacks suppressed
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.638576] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.214581] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.255487] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +3.027967] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.239865] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.216732] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0610 12:32:12.440665    8536 command_runner.go:130] > [  +0.314976] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0610 12:32:12.440665    8536 command_runner.go:130] > [  +0.112938] kauditd_printk_skb: 183 callbacks suppressed
	I0610 12:32:12.440665    8536 command_runner.go:130] > [  +0.871081] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	I0610 12:32:12.440665    8536 command_runner.go:130] > [  +5.053506] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	I0610 12:32:12.440665    8536 command_runner.go:130] > [  +0.123809] kauditd_printk_skb: 34 callbacks suppressed
	I0610 12:32:12.440720    8536 command_runner.go:130] > [Jun10 12:31] kauditd_printk_skb: 62 callbacks suppressed
	I0610 12:32:12.440720    8536 command_runner.go:130] > [  +3.513215] hrtimer: interrupt took 368589 ns
	I0610 12:32:12.440720    8536 command_runner.go:130] > [  +0.107277] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0610 12:32:12.440720    8536 command_runner.go:130] > [  +7.541664] kauditd_printk_skb: 70 callbacks suppressed
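
The dmesg pass above runs: sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400. The flag combination is deliberate: -H (human-readable) produces the relative [Jun10 12:29] / [  +0.000001] timestamps seen above but would normally also enable a pager and color, so -P (no pager) and -L=never (no color) turn those back off, while --level limits output to warning severity and above. To reproduce it by hand against the same VM (a sketch, assuming the profile is still running):

    out/minikube-windows-amd64.exe -p multinode-813300 ssh \
        "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
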
	I0610 12:32:12.442368    8536 logs.go:123] Gathering logs for describe nodes ...
	I0610 12:32:12.442368    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 12:32:12.666754    8536 command_runner.go:130] > Name:               multinode-813300
	I0610 12:32:12.666754    8536 command_runner.go:130] > Roles:              control-plane
	I0610 12:32:12.666754    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:12.666754    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:12.666754    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:12.666754    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300
	I0610 12:32:12.666754    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:12.666754    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0610 12:32:12.667755    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:12.667755    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:07:57 +0000
	I0610 12:32:12.667755    8536 command_runner.go:130] > Taints:             <none>
	I0610 12:32:12.667755    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:12.667755    8536 command_runner.go:130] > Lease:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300
	I0610 12:32:12.667755    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:12.667755    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:32:10 +0000
	I0610 12:32:12.667755    8536 command_runner.go:130] > Conditions:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0610 12:32:12.667755    8536 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0610 12:32:12.667755    8536 command_runner.go:130] >   DiskPressure     False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0610 12:32:12.667755    8536 command_runner.go:130] >   PIDPressure      False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Ready            True    Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:31:40 +0000   KubeletReady                 kubelet is posting ready status
	I0610 12:32:12.667755    8536 command_runner.go:130] > Addresses:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   InternalIP:  172.17.150.144
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Hostname:    multinode-813300
	I0610 12:32:12.667755    8536 command_runner.go:130] > Capacity:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.667755    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.667755    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.667755    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.667755    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.667755    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.667755    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.667755    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.667755    8536 command_runner.go:130] > System Info:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Machine ID:                 8363a852b0fa420a8dccb009e6f4f9c7
	I0610 12:32:12.667755    8536 command_runner.go:130] >   System UUID:                5734c1ff-f59b-f647-9c36-fb6d9a8cd541
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Boot ID:                    a60b688f-6b78-4fa5-b21e-96a64e5c1047
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:12.667755    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:12.667755    8536 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0610 12:32:12.667755    8536 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0610 12:32:12.667755    8536 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0610 12:32:12.667755    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-z28tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-kbhvv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 etcd-multinode-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 kindnet-29gbv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 kube-proxy-nrpvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0610 12:32:12.667755    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Resource           Requests     Limits
	I0610 12:32:12.667755    8536 command_runner.go:130] >   --------           --------     ------
	I0610 12:32:12.667755    8536 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0610 12:32:12.667755    8536 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0610 12:32:12.667755    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0610 12:32:12.667755    8536 command_runner.go:130] > Events:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-813300 status is now: NodeReady
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:12.668749    8536 command_runner.go:130] > Name:               multinode-813300-m02
	I0610 12:32:12.668749    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:12.668749    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m02
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:12.668749    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:12.668749    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:11:28 +0000
	I0610 12:32:12.668749    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:12.668749    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:12.668749    8536 command_runner.go:130] > Lease:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m02
	I0610 12:32:12.668749    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:12.668749    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:30 +0000
	I0610 12:32:12.668749    8536 command_runner.go:130] > Conditions:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:12.668749    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.668749    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.668749    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.668749    8536 command_runner.go:130] > Addresses:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   InternalIP:  172.17.151.128
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Hostname:    multinode-813300-m02
	I0610 12:32:12.668749    8536 command_runner.go:130] > Capacity:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.668749    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.668749    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.668749    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.668749    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.668749    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.668749    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.668749    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.668749    8536 command_runner.go:130] > System Info:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Machine ID:                 0d46b791e8a04ff7a071c88405a5a4eb
	I0610 12:32:12.668749    8536 command_runner.go:130] >   System UUID:                e053fc34-e8e5-6649-afc7-f62c0d458753
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Boot ID:                    a3528c50-da8b-4321-8198-65ea5eca732a
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:12.668749    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:12.668749    8536 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0610 12:32:12.668749    8536 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0610 12:32:12.668749    8536 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0610 12:32:12.668749    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-czxmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:12.668749    8536 command_runner.go:130] >   kube-system                 kindnet-r4nfq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0610 12:32:12.668749    8536 command_runner.go:130] >   kube-system                 kube-proxy-rx2b2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0610 12:32:12.668749    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:12.668749    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:12.668749    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:12.668749    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:12.668749    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:12.668749    8536 command_runner.go:130] > Events:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientMemory
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasNoDiskPressure
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientPID
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-813300-m02 status is now: NodeReady
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeNotReady             3m57s              node-controller  Node multinode-813300-m02 status is now: NodeNotReady
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:12.668749    8536 command_runner.go:130] > Name:               multinode-813300-m03
	I0610 12:32:12.668749    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:12.668749    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m03
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_25_53_0700
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:12.669804    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:12.669804    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:25:52 +0000
	I0610 12:32:12.669804    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:12.669804    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:12.669804    8536 command_runner.go:130] > Lease:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m03
	I0610 12:32:12.669804    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:12.669804    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:04 +0000
	I0610 12:32:12.669804    8536 command_runner.go:130] > Conditions:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:12.669804    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.669804    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.669804    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.669804    8536 command_runner.go:130] > Addresses:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   InternalIP:  172.17.144.46
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Hostname:    multinode-813300-m03
	I0610 12:32:12.669804    8536 command_runner.go:130] > Capacity:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.669804    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.669804    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.669804    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.669804    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.669804    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.669804    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.669804    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.669804    8536 command_runner.go:130] > System Info:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Machine ID:                 2d60e1f6e3b2454db505a650eae61212
	I0610 12:32:12.669804    8536 command_runner.go:130] >   System UUID:                b38b4a9a-39f6-6f43-9e6d-19433dc62cd9
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Boot ID:                    0a419483-5289-4d17-96c2-fd4487360412
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:12.669804    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:12.669804    8536 command_runner.go:130] > PodCIDR:                      10.244.2.0/24
	I0610 12:32:12.669804    8536 command_runner.go:130] > PodCIDRs:                     10.244.2.0/24
	I0610 12:32:12.669804    8536 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0610 12:32:12.669804    8536 command_runner.go:130] >   kube-system                 kindnet-2pc4j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m20s
	I0610 12:32:12.669804    8536 command_runner.go:130] >   kube-system                 kube-proxy-vw56h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	I0610 12:32:12.669804    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:12.669804    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:12.669804    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:12.669804    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:12.669804    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:12.669804    8536 command_runner.go:130] > Events:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  Starting                 6m7s                   kube-proxy       
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m20s (x2 over 6m20s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientMemory
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m20s (x2 over 6m20s)  kubelet          Node multinode-813300-m03 status is now: NodeHasNoDiskPressure
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m20s (x2 over 6m20s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientPID
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  RegisteredNode           6m18s                  node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
	I0610 12:32:12.670789    8536 command_runner.go:130] >   Normal  NodeReady                5m59s                  kubelet          Node multinode-813300-m03 status is now: NodeReady
	I0610 12:32:12.670789    8536 command_runner.go:130] >   Normal  NodeNotReady             4m28s                  node-controller  Node multinode-813300-m03 status is now: NodeNotReady
	I0610 12:32:12.670789    8536 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
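The three node descriptions above tell the story of this failure window: the control-plane node multinode-813300 has re-registered after the restart, while both workers (multinode-813300-m02 and -m03) report every condition as Unknown with reason NodeStatusUnknown ("Kubelet stopped posting node status") and carry the node.kubernetes.io/unreachable NoSchedule/NoExecute taints. A quick way to reproduce this view against the same cluster, assuming the kubectl context matches the profile name multinode-813300, is:

    # List nodes with their overall status (the two workers should show NotReady here)
    kubectl --context multinode-813300 get nodes -o wide

    # Print each node's Ready condition and any taint keys (unreachable workers show Unknown)
    kubectl --context multinode-813300 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\t"}{.spec.taints[*].key}{"\n"}{end}'

Nodes stay listed in this state until their kubelets reconnect; the RegisteredNode events at 60s above show the restarted controller-manager re-adopting all three nodes.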
	I0610 12:32:12.679741    8536 logs.go:123] Gathering logs for kube-apiserver [d7941126134f] ...
	I0610 12:32:12.680767    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7941126134f"
	I0610 12:32:12.725360    8536 command_runner.go:130] ! I0610 12:30:56.783636       1 options.go:221] external host was not specified, using 172.17.150.144
	I0610 12:32:12.725360    8536 command_runner.go:130] ! I0610 12:30:56.802716       1 server.go:148] Version: v1.30.1
	I0610 12:32:12.725429    8536 command_runner.go:130] ! I0610 12:30:56.802771       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:12.725429    8536 command_runner.go:130] ! I0610 12:30:57.206580       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 12:32:12.725429    8536 command_runner.go:130] ! I0610 12:30:57.224598       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:12.725491    8536 command_runner.go:130] ! I0610 12:30:57.225809       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 12:32:12.725553    8536 command_runner.go:130] ! I0610 12:30:57.226002       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 12:32:12.725708    8536 command_runner.go:130] ! I0610 12:30:57.226365       1 instance.go:299] Using reconciler: lease
	I0610 12:32:12.725770    8536 command_runner.go:130] ! I0610 12:30:57.637999       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0610 12:32:12.725871    8536 command_runner.go:130] ! W0610 12:30:57.638403       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.725871    8536 command_runner.go:130] ! I0610 12:30:58.007103       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0610 12:32:12.726572    8536 command_runner.go:130] ! I0610 12:30:58.008169       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0610 12:32:12.726572    8536 command_runner.go:130] ! I0610 12:30:58.357732       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0610 12:32:12.726572    8536 command_runner.go:130] ! I0610 12:30:58.553660       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0610 12:32:12.726572    8536 command_runner.go:130] ! I0610 12:30:58.567826       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0610 12:32:12.726572    8536 command_runner.go:130] ! W0610 12:30:58.567936       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726642    8536 command_runner.go:130] ! W0610 12:30:58.567947       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.726694    8536 command_runner.go:130] ! I0610 12:30:58.569137       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0610 12:32:12.726712    8536 command_runner.go:130] ! W0610 12:30:58.569236       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726712    8536 command_runner.go:130] ! I0610 12:30:58.570636       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0610 12:32:12.726745    8536 command_runner.go:130] ! I0610 12:30:58.572063       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0610 12:32:12.726745    8536 command_runner.go:130] ! W0610 12:30:58.572082       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0610 12:32:12.726779    8536 command_runner.go:130] ! W0610 12:30:58.572088       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0610 12:32:12.726808    8536 command_runner.go:130] ! I0610 12:30:58.575154       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0610 12:32:12.726808    8536 command_runner.go:130] ! W0610 12:30:58.575194       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0610 12:32:12.726835    8536 command_runner.go:130] ! I0610 12:30:58.576862       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0610 12:32:12.726869    8536 command_runner.go:130] ! W0610 12:30:58.576966       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726869    8536 command_runner.go:130] ! W0610 12:30:58.576976       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.726869    8536 command_runner.go:130] ! I0610 12:30:58.577920       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0610 12:32:12.726904    8536 command_runner.go:130] ! W0610 12:30:58.578059       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726904    8536 command_runner.go:130] ! W0610 12:30:58.578305       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726941    8536 command_runner.go:130] ! I0610 12:30:58.579295       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0610 12:32:12.726941    8536 command_runner.go:130] ! I0610 12:30:58.581807       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0610 12:32:12.726941    8536 command_runner.go:130] ! W0610 12:30:58.581943       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726941    8536 command_runner.go:130] ! W0610 12:30:58.582127       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727024    8536 command_runner.go:130] ! I0610 12:30:58.583254       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0610 12:32:12.727024    8536 command_runner.go:130] ! W0610 12:30:58.583359       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727024    8536 command_runner.go:130] ! W0610 12:30:58.583370       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727024    8536 command_runner.go:130] ! I0610 12:30:58.594003       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0610 12:32:12.727024    8536 command_runner.go:130] ! W0610 12:30:58.594046       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0610 12:32:12.727024    8536 command_runner.go:130] ! I0610 12:30:58.597008       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0610 12:32:12.727024    8536 command_runner.go:130] ! W0610 12:30:58.597028       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727125    8536 command_runner.go:130] ! W0610 12:30:58.597047       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727125    8536 command_runner.go:130] ! I0610 12:30:58.597658       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0610 12:32:12.727125    8536 command_runner.go:130] ! W0610 12:30:58.597679       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727125    8536 command_runner.go:130] ! W0610 12:30:58.597686       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727125    8536 command_runner.go:130] ! I0610 12:30:58.602889       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0610 12:32:12.727191    8536 command_runner.go:130] ! W0610 12:30:58.602907       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727191    8536 command_runner.go:130] ! W0610 12:30:58.602913       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727191    8536 command_runner.go:130] ! I0610 12:30:58.608646       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0610 12:32:12.727245    8536 command_runner.go:130] ! I0610 12:30:58.610262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0610 12:32:12.727280    8536 command_runner.go:130] ! W0610 12:30:58.610275       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0610 12:32:12.727309    8536 command_runner.go:130] ! W0610 12:30:58.610281       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727309    8536 command_runner.go:130] ! I0610 12:30:58.619816       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0610 12:32:12.727339    8536 command_runner.go:130] ! W0610 12:30:58.619856       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0610 12:32:12.727339    8536 command_runner.go:130] ! W0610 12:30:58.619866       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0610 12:32:12.727379    8536 command_runner.go:130] ! I0610 12:30:58.627044       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0610 12:32:12.727379    8536 command_runner.go:130] ! W0610 12:30:58.627092       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727417    8536 command_runner.go:130] ! W0610 12:30:58.627296       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727417    8536 command_runner.go:130] ! I0610 12:30:58.629017       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0610 12:32:12.727417    8536 command_runner.go:130] ! W0610 12:30:58.629067       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727476    8536 command_runner.go:130] ! I0610 12:30:58.659122       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0610 12:32:12.727500    8536 command_runner.go:130] ! W0610 12:30:58.659244       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727523    8536 command_runner.go:130] ! I0610 12:30:59.341469       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:12.727550    8536 command_runner.go:130] ! I0610 12:30:59.341814       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:12.727550    8536 command_runner.go:130] ! I0610 12:30:59.341806       1 secure_serving.go:213] Serving securely on [::]:8443
	I0610 12:32:12.729353    8536 command_runner.go:130] ! I0610 12:30:59.342486       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0610 12:32:12.729677    8536 command_runner.go:130] ! I0610 12:30:59.342867       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0610 12:32:12.730522    8536 command_runner.go:130] ! I0610 12:30:59.342901       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0610 12:32:12.730522    8536 command_runner.go:130] ! I0610 12:30:59.342987       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0610 12:32:12.730522    8536 command_runner.go:130] ! I0610 12:30:59.341865       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:12.731087    8536 command_runner.go:130] ! I0610 12:30:59.344865       1 controller.go:116] Starting legacy_token_tracking_controller
	I0610 12:32:12.731087    8536 command_runner.go:130] ! I0610 12:30:59.344899       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0610 12:32:12.731087    8536 command_runner.go:130] ! I0610 12:30:59.346737       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0610 12:32:12.731087    8536 command_runner.go:130] ! I0610 12:30:59.346910       1 available_controller.go:423] Starting AvailableConditionController
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.346960       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.347078       1 aggregator.go:163] waiting for initial CRD sync...
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.347170       1 controller.go:78] Starting OpenAPI AggregationController
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.347256       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.347656       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.347947       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0610 12:32:12.731247    8536 command_runner.go:130] ! I0610 12:30:59.348233       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0610 12:32:12.731247    8536 command_runner.go:130] ! I0610 12:30:59.348295       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.341877       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.377996       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.378109       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.378362       1 controller.go:139] Starting OpenAPI controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.378742       1 controller.go:87] Starting OpenAPI V3 controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.378883       1 naming_controller.go:291] Starting NamingConditionController
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379043       1 establishing_controller.go:76] Starting EstablishingController
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379247       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379438       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379518       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379777       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379999       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.524664       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.525326       1 policy_source.go:224] refreshing policies
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.543486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.547084       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.548579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.549972       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.550011       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.551151       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.554229       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.560228       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.578343       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.578414       1 aggregator.go:165] initial CRD sync complete...
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.578429       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.578437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.578466       1 cache.go:39] Caches are synced for autoregister controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.606740       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:00.360768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 12:32:12.731286    8536 command_runner.go:130] ! W0610 12:31:00.893787       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.150.144]
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:00.913283       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:00.933946       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:02.471259       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:02.690867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:02.714405       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 12:32:12.731820    8536 command_runner.go:130] ! I0610 12:31:02.840117       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 12:32:12.731820    8536 command_runner.go:130] ! I0610 12:31:02.856715       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
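The kube-apiserver log above shows a clean cold start: serving begins on [::]:8443, the informer caches sync, and quota evaluators are registered one by one as kubeadm recreates the core objects (leases, endpoints, daemonsets, deployments, RBAC roles). When triaging a failure like this by hand, one way to confirm the restarted apiserver is actually healthy, assuming the same context name as above, is to hit its aggregated health endpoints:

    # Per-check readiness breakdown (etcd, informer sync, post-start hooks, ...)
    kubectl --context multinode-813300 get --raw '/readyz?verbose'

    # Liveness summary; an error body or non-zero exit means the apiserver is not serving
    kubectl --context multinode-813300 get --raw '/livez'

Both endpoints exist on the Kubernetes version in this run (v1.30.1) and answer on the same secure port the log mentions.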
	I0610 12:32:12.740531    8536 logs.go:123] Gathering logs for Docker ...
	I0610 12:32:12.740531    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 12:32:12.768085    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
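Up to this point the journal shows cri-dockerd crash-looping: each start fails with "Cannot connect to the Docker daemon at unix:///var/run/docker.sock" because dockerd itself is not up yet, and after the third attempt systemd hits its start-rate limit ("Start request repeated too quickly") and gives up until the unit is kicked again during provisioning. A minimal sketch of how to inspect and clear this state from the host, assuming the profile name multinode-813300 taken from this run, is:

    # Open a shell inside the minikube guest for this profile
    minikube -p multinode-813300 ssh

    # Inside the guest: view the interleaved docker / cri-docker history, as this log gatherer does
    sudo journalctl -u docker -u cri-docker --no-pager -n 100

    # Clear the start-rate limit, then bring the units back up (dockerd must be up before cri-dockerd)
    sudo systemctl reset-failed cri-docker
    sudo systemctl restart docker cri-docker

The entries that follow (dockerd starting at 12:30:13 under the multinode-813300 hostname) show exactly this recovery happening as part of the node restart.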
	I0610 12:32:12.769173    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:12.769173    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.665734294Z" level=info msg="Starting up"
	I0610 12:32:12.769223    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.666799026Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:12.769268    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.668025832Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I0610 12:32:12.769296    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.707077561Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:12.769296    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745342414Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:12.769358    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745425201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:12.769358    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745528085Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:12.769397    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745580077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769397    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746319960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.769440    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746463837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746722696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746775088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746796184Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746813182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.747203320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.748049086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752393000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752519780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.770079    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752692453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.770098    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752790737Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753305956Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753420338Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753439135Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759080243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759316106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759347801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759374497Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759392594Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759476281Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759928509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760128877Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760824467Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760850663Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760867361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760883758Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760898556Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760914553Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760935350Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760951047Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760966645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770867    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760986442Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761064230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761105323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761128319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761143417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761158215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761173012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761187310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761210007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761455768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761477764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761493962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761507660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761522057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761538755Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761561351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761583448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761598445Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761652437Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761676833Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761691230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761709928Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761721526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761735324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761752021Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762164056Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762290536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762532698Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762557794Z" level=info msg="containerd successfully booted in 0.059804s"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.723660372Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.979070633Z" level=info msg="Loading containers: start."
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.430556665Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.525359393Z" level=info msg="Loading containers: done."
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.555368825Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.556499190Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614621979Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614710469Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 systemd[1]: Started Docker Application Container Engine.
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.105858304Z" level=info msg="Processing signal 'terminated'"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.107858244Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 systemd[1]: Stopping Docker Application Container Engine...
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109274172Z" level=info msg="Daemon shutdown complete"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109439076Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109591179Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: docker.service: Deactivated successfully.
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Stopped Docker Application Container Engine.
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.200932485Z" level=info msg="Starting up"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.202989526Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:12.772354    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.204789062Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0610 12:32:12.772411    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.250167169Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:12.772480    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291799101Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:12.772480    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291856902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:12.772480    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291930003Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:12.772480    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291948904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772548    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291983304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.772548    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291997405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772606    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292182308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.772606    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292287811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772667    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292310511Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:12.772721    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292322911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772721    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292350212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772756    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292701119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772794    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.295953884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.772832    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296063086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772893    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296411793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.772945    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296455694Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:12.772945    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296587396Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:12.773470    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296721299Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:12.773470    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296741600Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:12.773470    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296941504Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:12.773531    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297027105Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:12.773531    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297046206Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:12.773626    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297078906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:12.773645    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297254610Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:12.773645    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297334111Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:12.773645    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297955024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:12.773719    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298031825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:12.773719    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298071126Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:12.773719    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298090126Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:12.773791    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298105527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773791    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298120527Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773791    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298155728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298172828Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298189828Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298204229Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298218329Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298230929Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298260030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298281530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298296531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298318131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298333531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298494735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298514735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298529635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298592837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298610037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298624437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298639137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298652438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298669738Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298693539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298708139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298720839Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298773440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298792441Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298805041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298820841Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298832741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298850742Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298862942Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299109447Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299202249Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299272150Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299312051Z" level=info msg="containerd successfully booted in 0.052836s"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.253253712Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.287070988Z" level=info msg="Loading containers: start."
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.612574192Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.704084520Z" level=info msg="Loading containers: done."
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733112200Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733256003Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.788468006Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 systemd[1]: Started Docker Application Container Engine.
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.790252742Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Loaded network plugin cni"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start cri-dockerd grpc backend"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-kbhvv_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c\""
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-z28tq_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d\""
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013449453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013587556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013608856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013775860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.087769538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089579074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089879880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.090133785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183156944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183215145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183227346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183318447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f56cc8af37db0f3fea8de363d927c6924c7ad7e81f4908f6f5c87d6c0db17a61/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244245765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244411968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244427968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244593672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8902dac03acbce14b7e106bff482e591dd574972082943e9adda30969716a707/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b13c0058ce265f3c4b18ec59cbb42b72803807a8d96330756114b2526fffa2de/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611175897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611296299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611337700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.612109315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730665784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730725385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730738886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730907689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848373736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848822145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851216993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851612501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900274973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900404876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900419576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900508378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:59Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830014876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830867993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831086098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831510106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854754571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854918174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.857723530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.858668949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.877394923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878360042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878507645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.879086357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d997d7c306c2a08fab9e0e53bd14a9da495d8b0abdad38c9935489b788eccd/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2dd9b423841c9fee92dc2a884fe8f45fb9dd5b8713214ce8804ac8ced10629d1/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337790622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337963526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337992226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.338102629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.394005846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396505296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396667999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396999105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c19b39e15f6ae82627ffedaf799ef63dd09554d65260dbfc8856b08a4ce7354/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.711733694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712144402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712256705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712964519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.980963328Z" level=info msg="shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981035932Z" level=warning msg="cleaning up after shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981047633Z" level=info msg="cleaning up dead shim" namespace=moby
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1052]: time="2024-06-10T12:31:31.981399154Z" level=info msg="ignoring event" container=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.486941957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487165464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487187665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.488142597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345354892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345592698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345620799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345913706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.511059667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512286197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512437501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512775109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241c4811748facbb85003522d513039c3dfc5b38006b7f1cba90a5e411055e97/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4d124cebb3b3affe7ace090f1a152544207db26621b5b4098cad87e3db47a4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955148547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955266050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955283650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955812861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444723816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444892597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444914895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.445846695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.810555    8536 logs.go:123] Gathering logs for kube-proxy [afad8b05897e] ...
	I0610 12:32:12.810555    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 afad8b05897e"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.787330       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.815813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.159.171"]
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.929231       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.929304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.929325       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.933115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.933534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.933681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.935227       1 config.go:192] "Starting service config controller"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.935260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.935291       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.935297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.937731       1 config.go:319] "Starting node config controller"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.938095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:18.035433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:18.035502       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:18.038590       1 shared_informer.go:320] Caches are synced for node config
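The kube-proxy startup above is the standard client-go shared-informer handshake: each config controller logs "Waiting for caches to sync" and only proceeds once "Caches are synced" appears. A minimal sketch of that pattern, assuming a reachable cluster via the default kubeconfig (the resource watched here is illustrative, not kube-proxy's actual wiring):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a cluster reachable through ~/.kube/config.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        svcInformer := factory.Core().V1().Services().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // Same handshake as the "Waiting for caches to sync" /
        // "Caches are synced" pairs in the kube-proxy log above.
        fmt.Println("Waiting for caches to sync for service config")
        if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
            panic("caches did not sync")
        }
        fmt.Println("Caches are synced for service config")
    }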
	I0610 12:32:12.843904    8536 logs.go:123] Gathering logs for container status ...
	I0610 12:32:12.843904    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 12:32:12.912460    8536 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0610 12:32:12.912460    8536 command_runner.go:130] > b9550940a81ca       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   c4d124cebb3b3       busybox-fc5497c4f-z28tq
	I0610 12:32:12.912460    8536 command_runner.go:130] > 24f3f7e041f98       cbb01a7bd410d                                                                                         8 seconds ago        Running             coredns                   1                   241c4811748fa       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:12.912460    8536 command_runner.go:130] > e934ffe0f9032       6e38f40d628db                                                                                         25 seconds ago       Running             storage-provisioner       2                   2dd9b423841c9       storage-provisioner
	I0610 12:32:12.912460    8536 command_runner.go:130] > c3c4316beca64       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   0c19b39e15f6a       kindnet-29gbv
	I0610 12:32:12.912460    8536 command_runner.go:130] > cc9dbe4aa4005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   2dd9b423841c9       storage-provisioner
	I0610 12:32:12.912460    8536 command_runner.go:130] > 1de5fa0ef8384       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   06d997d7c306c       kube-proxy-nrpvt
	I0610 12:32:12.912460    8536 command_runner.go:130] > d7941126134f2       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   5c3da3b59b527       kube-apiserver-multinode-813300
	I0610 12:32:12.912460    8536 command_runner.go:130] > 877ee07c14997       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   b13c0058ce265       etcd-multinode-813300
	I0610 12:32:12.912460    8536 command_runner.go:130] > d90e72ef46704       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   8902dac03acbc       kube-scheduler-multinode-813300
	I0610 12:32:12.912460    8536 command_runner.go:130] > 3bee53d5fef91       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   f56cc8af37db0       kube-controller-manager-multinode-813300
	I0610 12:32:12.912460    8536 command_runner.go:130] > 91782a06524c6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   9ffef928b2474       busybox-fc5497c4f-z28tq
	I0610 12:32:12.912460    8536 command_runner.go:130] > f2e39052db195       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   a1ae7aed00678       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:12.912460    8536 command_runner.go:130] > c39d54960e7d7       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   689b8976cc029       kindnet-29gbv
	I0610 12:32:12.912460    8536 command_runner.go:130] > afad8b05897e5       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   62db1c721951a       kube-proxy-nrpvt
	I0610 12:32:12.912460    8536 command_runner.go:130] > bd1a6cd987430       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e3b6aa9a0e1d1       kube-scheduler-multinode-813300
	I0610 12:32:12.912460    8536 command_runner.go:130] > f1409bf44ff14       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f04d7b3d4fcc6       kube-controller-manager-multinode-813300
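The container-status step above shells out to `sudo crictl ps -a || sudo docker ps -a`, preferring the CRI tool and falling back to the Docker CLI. A rough Go equivalent of that fallback (the helper name is hypothetical; only the two commands come from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runFirstAvailable tries each command in order and returns the output
    // of the first one that succeeds, mirroring the
    // "crictl ps -a || docker ps -a" fallback shown in the log above.
    func runFirstAvailable(cmds [][]string) ([]byte, error) {
        var lastErr error
        for _, c := range cmds {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            if err == nil {
                return out, nil
            }
            lastErr = err
        }
        return nil, lastErr
    }

    func main() {
        out, err := runFirstAvailable([][]string{
            {"crictl", "ps", "-a"},
            {"docker", "ps", "-a"},
        })
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }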
	I0610 12:32:12.915441    8536 logs.go:123] Gathering logs for etcd [877ee07c1499] ...
	I0610 12:32:12.915441    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877ee07c1499"
	I0610 12:32:12.950726    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.207374Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:12.950726    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208407Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.150.144:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.150.144:2380","--initial-cluster=multinode-813300=https://172.17.150.144:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.150.144:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.150.144:2380","--name=multinode-813300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0610 12:32:12.950726    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208499Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0610 12:32:12.951546    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.208577Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:12.951546    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208593Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.150.144:2380"]}
	I0610 12:32:12.951610    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208715Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:12.951610    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.218326Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"]}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.22047Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-813300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.244201Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.944438ms"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.274404Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.303075Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","commit-index":1913}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=()"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became follower at term 2"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8f4442f54c46fb8d [peers: [], term: 2, commit: 1913, applied: 0, lastindex: 1913, lastterm: 2]"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.318917Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.323726Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1273}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.328272Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1642}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.335671Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.347777Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8f4442f54c46fb8d","timeout":"7s"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.349755Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8f4442f54c46fb8d"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.350228Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8f4442f54c46fb8d","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.352715Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.36067Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361057Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361302Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=(10323449867154160525)"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363612Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","added-peer-id":"8f4442f54c46fb8d","added-peer-peer-urls":["https://172.17.159.171:2380"]}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364067Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","cluster-version":"3.5"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.367772Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:12.952272    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.373962Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.150.144:2380"}
	I0610 12:32:12.952318    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.374209Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.150.144:2380"}
	I0610 12:32:12.952364    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375497Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8f4442f54c46fb8d","initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0610 12:32:12.952413    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375805Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0610 12:32:12.952468    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d is starting a new election at term 2"}
	I0610 12:32:12.952468    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.50539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became pre-candidate at term 2"}
	I0610 12:32:12.952468    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgPreVoteResp from 8f4442f54c46fb8d at term 2"}
	I0610 12:32:12.952468    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became candidate at term 3"}
	I0610 12:32:12.952550    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgVoteResp from 8f4442f54c46fb8d at term 3"}
	I0610 12:32:12.952582    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became leader at term 3"}
	I0610 12:32:12.952582    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8f4442f54c46fb8d elected leader 8f4442f54c46fb8d at term 3"}
	I0610 12:32:12.952582    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.511486Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8f4442f54c46fb8d","local-member-attributes":"{Name:multinode-813300 ClientURLs:[https://172.17.150.144:2379]}","request-path":"/0/members/8f4442f54c46fb8d/attributes","cluster-id":"ede117c4f607edf2","publish-timeout":"7s"}
	I0610 12:32:12.952651    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:12.952690    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512682Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:12.952690    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.517481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0610 12:32:12.952690    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0610 12:32:12.952742    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520973Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0610 12:32:12.952742    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.543402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.150.144:2379"}
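The etcd log above records a clean single-member restart: the member comes back as a follower at term 2, pre-votes, and elects itself leader at term 3. One way to verify leader and term after such a restart is the clientv3 maintenance Status call; a minimal sketch assuming an unauthenticated local endpoint for brevity (the cluster above actually serves client TLS on 2379, so real use would load the certs under /var/lib/minikube/certs/etcd):

    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            // Assumption: an insecure local endpoint, unlike the
            // TLS-secured listener in the log above.
            Endpoints:   []string{"http://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        st, err := cli.Status(ctx, "http://127.0.0.1:2379")
        if err != nil {
            panic(err)
        }
        // Leader and RaftTerm correspond to the
        // "became leader at term 3" raft lines above.
        fmt.Printf("leader=%x term=%d version=%s\n", st.Leader, st.RaftTerm, st.Version)
    }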
	I0610 12:32:12.963241    8536 logs.go:123] Gathering logs for coredns [f2e39052db19] ...
	I0610 12:32:12.963241    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e39052db19"
	I0610 12:32:12.997242    8536 command_runner.go:130] > .:53
	I0610 12:32:12.998246    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:12.998310    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:12.998310    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 127.0.0.1:46276 - 35337 "HINFO IN 965239639799927989.3587586823131848737. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.052340371s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:36040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003047s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:51901 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.165635405s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:38890 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.065664181s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:40219 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.107303974s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:38184 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002396s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:57966 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001307s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:38338 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002131s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:41898 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000121s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:49043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200101s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:53918 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.147842886s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:50531 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001726s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:41881 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001246s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:34708 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.030026838s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002834s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:58166 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001901s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:46174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001048s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:52212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003513s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:44369 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095801s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:38578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001615s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:38593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002977s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:38526 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137201s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:48445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001467s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:47462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000731s
	I0610 12:32:12.998887    8536 command_runner.go:130] > [INFO] 10.244.0.3:58225 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196101s
	I0610 12:32:12.998967    8536 command_runner.go:130] > [INFO] 10.244.1.2:35924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001833s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.1.2:51712 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001386s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.1.2:37161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.1.2:37141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001227s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.0.3:56133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247001s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.0.3:48451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000604s
	I0610 12:32:12.999111    8536 command_runner.go:130] > [INFO] 10.244.0.3:38368 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001264s
	I0610 12:32:12.999210    8536 command_runner.go:130] > [INFO] 10.244.1.2:44129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001056s
	I0610 12:32:12.999272    8536 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001955s
	I0610 12:32:12.999272    8536 command_runner.go:130] > [INFO] 10.244.1.2:59467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001589s
	I0610 12:32:12.999272    8536 command_runner.go:130] > [INFO] 10.244.1.2:53581 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002131s
	I0610 12:32:12.999272    8536 command_runner.go:130] > [INFO] 10.244.0.3:41745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	I0610 12:32:12.999345    8536 command_runner.go:130] > [INFO] 10.244.0.3:53512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001784s
	I0610 12:32:12.999345    8536 command_runner.go:130] > [INFO] 10.244.0.3:56441 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001208s
	I0610 12:32:12.999345    8536 command_runner.go:130] > [INFO] 10.244.0.3:55640 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001199s
	I0610 12:32:12.999345    8536 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0610 12:32:12.999345    8536 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
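Each CoreDNS query line above comes from the log plugin and carries, in order: client address, query id, type, name, protocol, request size, DO bit, UDP buffer size, then rcode, response flags, response size, and duration. A small parser for the fields this report actually inspects (the format assumptions are read off the lines above):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the CoreDNS log-plugin lines shown above:
    // [INFO] <client> - <id> "<type> IN <name> <proto> <size> <do> <bufsize>" <rcode> <flags> <rsize> <duration>
    var queryLine = regexp.MustCompile(
        `\[INFO\] (\S+) - (\d+) "(\S+) IN (\S+) (\S+) \d+ \S+ \d+" (\S+) (\S+) (\d+) (\S+)`)

    func main() {
        line := `[INFO] 10.244.1.2:36040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003047s`
        m := queryLine.FindStringSubmatch(line)
        if m == nil {
            panic("line did not match")
        }
        fmt.Printf("client=%s type=%s name=%s rcode=%s took=%s\n",
            m[1], m[3], m[4], m[6], m[9])
    }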
	I0610 12:32:13.002280    8536 logs.go:123] Gathering logs for kube-scheduler [d90e72ef4670] ...
	I0610 12:32:13.002345    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90e72ef4670"
	I0610 12:32:13.034151    8536 command_runner.go:130] ! I0610 12:30:56.811878       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:13.035221    8536 command_runner.go:130] ! W0610 12:30:59.481898       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:13.035221    8536 command_runner.go:130] ! W0610 12:30:59.482123       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:13.035221    8536 command_runner.go:130] ! W0610 12:30:59.482217       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:13.035221    8536 command_runner.go:130] ! W0610 12:30:59.482255       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.514164       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.514266       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.518405       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.518496       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.518958       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.519337       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.619122       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
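The scheduler warnings above are the familiar startup race on the extension-apiserver-authentication configmap, and the log itself names the fix: bind the extension-apiserver-authentication-reader role to the affected service account. For reference, a client-go equivalent of the suggested kubectl command, keeping the log's placeholder names:

    package main

    import (
        "context"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        rb := &rbacv1.RoleBinding{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "ROLEBINDING_NAME", // placeholder, as in the log message
                Namespace: "kube-system",
            },
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "Role",
                Name:     "extension-apiserver-authentication-reader",
            },
            Subjects: []rbacv1.Subject{{
                Kind:      "ServiceAccount",
                Name:      "YOUR_SA", // placeholder, as in the log message
                Namespace: "YOUR_NS", // placeholder, as in the log message
            }},
        }
        _, err = client.RbacV1().RoleBindings("kube-system").Create(
            context.Background(), rb, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
    }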
	I0610 12:32:13.038028    8536 logs.go:123] Gathering logs for kube-controller-manager [f1409bf44ff1] ...
	I0610 12:32:13.038028    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1409bf44ff1"
	I0610 12:32:13.076581    8536 command_runner.go:130] ! I0610 12:07:55.502430       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:13.076581    8536 command_runner.go:130] ! I0610 12:07:56.114557       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:13.076707    8536 command_runner.go:130] ! I0610 12:07:56.114858       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.076707    8536 command_runner.go:130] ! I0610 12:07:56.117078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:13.076707    8536 command_runner.go:130] ! I0610 12:07:56.117365       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:13.076707    8536 command_runner.go:130] ! I0610 12:07:56.118392       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:13.076772    8536 command_runner.go:130] ! I0610 12:07:56.118623       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:13.076802    8536 command_runner.go:130] ! I0610 12:08:00.413505       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:13.076802    8536 command_runner.go:130] ! I0610 12:08:00.413532       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:13.076837    8536 command_runner.go:130] ! I0610 12:08:00.454038       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:13.076873    8536 command_runner.go:130] ! I0610 12:08:00.454303       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:13.076873    8536 command_runner.go:130] ! I0610 12:08:00.454341       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:13.076921    8536 command_runner.go:130] ! I0610 12:08:00.474947       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:13.076921    8536 command_runner.go:130] ! I0610 12:08:00.475105       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:00.475116       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:00.514703       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.509914       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.510020       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.511115       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.511148       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.515475       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.515547       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.516308       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.516334       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.516340       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.531416       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.531434       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.531293       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.543960       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.544630       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.544667       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.567000       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.567602       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.568240       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.586627       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.587637       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.587654       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.623685       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.623975       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.624342       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.639985       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.640617       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.640810       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.702326       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.706246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.711937       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:13.077479    8536 command_runner.go:130] ! I0610 12:08:10.712131       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.712146       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.712235       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.712265       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.724980       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.726393       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.726653       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:13.077601    8536 command_runner.go:130] ! I0610 12:08:10.742390       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:13.077626    8536 command_runner.go:130] ! I0610 12:08:10.743099       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:13.077626    8536 command_runner.go:130] ! I0610 12:08:10.744498       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:13.077626    8536 command_runner.go:130] ! I0610 12:08:10.759177       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:13.077626    8536 command_runner.go:130] ! I0610 12:08:10.759262       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:13.077707    8536 command_runner.go:130] ! I0610 12:08:10.759917       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:13.077707    8536 command_runner.go:130] ! I0610 12:08:10.759932       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:13.077707    8536 command_runner.go:130] ! I0610 12:08:10.901245       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:13.077707    8536 command_runner.go:130] ! I0610 12:08:10.903470       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:13.077791    8536 command_runner.go:130] ! I0610 12:08:10.903502       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:13.077791    8536 command_runner.go:130] ! I0610 12:08:11.064066       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:13.077791    8536 command_runner.go:130] ! I0610 12:08:11.064123       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:13.077872    8536 command_runner.go:130] ! I0610 12:08:11.064135       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:13.077872    8536 command_runner.go:130] ! I0610 12:08:11.202164       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:13.077925    8536 command_runner.go:130] ! I0610 12:08:11.202227       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:13.077925    8536 command_runner.go:130] ! I0610 12:08:11.202239       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:13.077960    8536 command_runner.go:130] ! I0610 12:08:11.352380       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:13.077997    8536 command_runner.go:130] ! I0610 12:08:11.352546       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:13.078037    8536 command_runner.go:130] ! I0610 12:08:11.352575       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:13.078037    8536 command_runner.go:130] ! I0610 12:08:11.656918       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:13.078037    8536 command_runner.go:130] ! I0610 12:08:11.657560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:13.078109    8536 command_runner.go:130] ! I0610 12:08:11.657950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:13.078109    8536 command_runner.go:130] ! I0610 12:08:11.658269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:13.078109    8536 command_runner.go:130] ! I0610 12:08:11.658437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:13.078169    8536 command_runner.go:130] ! I0610 12:08:11.658699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:13.078209    8536 command_runner.go:130] ! I0610 12:08:11.658785       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:13.078288    8536 command_runner.go:130] ! I0610 12:08:11.658822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:13.078288    8536 command_runner.go:130] ! I0610 12:08:11.658849       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:13.078288    8536 command_runner.go:130] ! I0610 12:08:11.658870       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:13.078343    8536 command_runner.go:130] ! I0610 12:08:11.658895       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.658915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.658950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.658987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659056       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! W0610 12:08:11.659073       1 shared_informer.go:597] resyncPeriod 13h6m28.341601393s is smaller than resyncCheckPeriod 19h0m49.916968618s and the informer has already started. Changing it to 19h0m49.916968618s
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659195       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659214       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659236       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659287       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659312       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659579       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659591       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.895313       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.895383       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.895693       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.896490       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.154521       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.154576       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.154658       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.154690       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.301351       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.301495       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.301508       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.495309       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.495425       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.495645       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.495683       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:13.078393    8536 command_runner.go:130] ! E0610 12:08:12.550245       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.550671       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:13.078930    8536 command_runner.go:130] ! E0610 12:08:12.700493       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:13.078930    8536 command_runner.go:130] ! I0610 12:08:12.700528       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:13.079010    8536 command_runner.go:130] ! I0610 12:08:12.700538       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:13.079010    8536 command_runner.go:130] ! I0610 12:08:12.859280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:13.079010    8536 command_runner.go:130] ! I0610 12:08:12.859580       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:13.079010    8536 command_runner.go:130] ! I0610 12:08:12.859953       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:13.079010    8536 command_runner.go:130] ! I0610 12:08:12.906626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:13.079123    8536 command_runner.go:130] ! I0610 12:08:12.907724       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:13.079123    8536 command_runner.go:130] ! I0610 12:08:13.050431       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:13.079123    8536 command_runner.go:130] ! I0610 12:08:13.050510       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:13.079123    8536 command_runner.go:130] ! I0610 12:08:13.205885       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:13.079217    8536 command_runner.go:130] ! I0610 12:08:13.205970       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.205982       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.351713       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.351815       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.351830       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.603420       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.603498       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.603510       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.750262       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.750789       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.750809       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.900118       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.900639       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.900897       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.054008       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.054156       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.054170       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.199527       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.199627       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.199683       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.199694       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.351474       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.352193       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.352213       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.502148       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.502250       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.502262       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.502696       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.502825       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.546684       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.547077       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.547608       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.547097       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.547127       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:13.079792    8536 command_runner.go:130] ! I0610 12:08:14.547931       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:13.079839    8536 command_runner.go:130] ! I0610 12:08:14.547138       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.547188       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.548434       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.547199       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.547257       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.548692       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.547265       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.558628       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.590023       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.600506       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.602694       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.603324       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.609611       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.612038       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.623629       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.624495       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.612329       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.628289       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.630516       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.630648       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.622860       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.627541       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.627554       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.627562       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.627813       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.631141       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.631364       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.631669       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.631834       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.642451       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.644828       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.645380       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.647678       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.648798       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.648809       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.648848       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:13.080400    8536 command_runner.go:130] ! I0610 12:08:14.656075       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:13.080400    8536 command_runner.go:130] ! I0610 12:08:14.656781       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:13.080400    8536 command_runner.go:130] ! I0610 12:08:14.657449       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:13.080400    8536 command_runner.go:130] ! I0610 12:08:14.657643       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.658125       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.661079       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.668926       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.675620       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.680953       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300" podCIDRs=["10.244.0.0/24"]
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.687842       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:13.080541    8536 command_runner.go:130] ! I0610 12:08:14.751377       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:13.080586    8536 command_runner.go:130] ! I0610 12:08:14.754827       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:13.080586    8536 command_runner.go:130] ! I0610 12:08:14.795731       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:13.080586    8536 command_runner.go:130] ! I0610 12:08:14.803976       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:13.080586    8536 command_runner.go:130] ! I0610 12:08:14.807376       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:13.080586    8536 command_runner.go:130] ! I0610 12:08:14.807800       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:14.851108       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:14.858915       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:14.859692       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:14.864873       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:15.295934       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:15.296041       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:15.332772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:15.887603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="329.520484ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.024148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.478301ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.151441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.784808ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.151859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="288.402µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.577624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.03545ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.593339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.556101ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.593508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.3µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:30.535681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:30.566310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.4µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:32.538906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="180.301µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:32.610537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.137489ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:32.611020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:34.635560       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:11:28.859639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:11:28.879298       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m02" podCIDRs=["10.244.1.0/24"]
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:11:29.670639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:11:51.574110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:12:19.785464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.490556ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:12:19.804051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.524284ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:12:19.806222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0610 12:32:13.081175    8536 command_runner.go:130] ! I0610 12:12:19.813010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.401µs"
	I0610 12:32:13.081175    8536 command_runner.go:130] ! I0610 12:12:19.818841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.9µs"
	I0610 12:32:13.081175    8536 command_runner.go:130] ! I0610 12:12:22.803157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.023114ms"
	I0610 12:32:13.081234    8536 command_runner.go:130] ! I0610 12:12:22.803959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.7µs"
	I0610 12:32:13.081234    8536 command_runner.go:130] ! I0610 12:12:23.117968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.704624ms"
	I0610 12:32:13.081234    8536 command_runner.go:130] ! I0610 12:12:23.118507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.5µs"
	I0610 12:32:13.081234    8536 command_runner.go:130] ! I0610 12:25:52.678571       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:13.081334    8536 command_runner.go:130] ! I0610 12:25:52.681612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:13.081334    8536 command_runner.go:130] ! I0610 12:25:52.698797       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m03" podCIDRs=["10.244.2.0/24"]
	I0610 12:32:13.081415    8536 command_runner.go:130] ! I0610 12:25:54.878967       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:13.081436    8536 command_runner.go:130] ! I0610 12:26:13.380155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:13.081436    8536 command_runner.go:130] ! I0610 12:27:44.944679       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:13.081436    8536 command_runner.go:130] ! I0610 12:28:15.516170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.644756ms"
	I0610 12:32:13.081436    8536 command_runner.go:130] ! I0610 12:28:15.516815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.1µs"
	I0610 12:32:13.099511    8536 logs.go:123] Gathering logs for kindnet [c3c4316beca6] ...
	I0610 12:32:13.099511    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c4316beca6"
	I0610 12:32:13.135489    8536 command_runner.go:130] ! I0610 12:31:02.264969       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:02.265572       1 main.go:107] hostIP = 172.17.150.144
	I0610 12:32:13.136494    8536 command_runner.go:130] ! podIP = 172.17.150.144
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:02.265708       1 main.go:116] setting mtu 1500 for CNI 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:02.265761       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:02.265778       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.684223       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.703397       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.703595       1 main.go:227] handling current node
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.742189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.742230       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.742783       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.151.128 Flags: [] Table: 0} 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.743097       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.743120       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.743193       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750326       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750472       1 main.go:227] handling current node
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750487       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750494       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750648       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750678       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767023       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767174       1 main.go:227] handling current node
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767191       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767199       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767842       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767929       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.782886       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.782992       1 main.go:227] handling current node
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.783008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.783073       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.783951       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.784044       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.799859       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.799956       1 main.go:227] handling current node
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.799981       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.799989       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.800455       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.800616       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:13.139473    8536 logs.go:123] Gathering logs for kubelet ...
	I0610 12:32:13.139473    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 12:32:13.175674    8536 command_runner.go:130] > Jun 10 12:30:48 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:13.175674    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322075    1392 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:13.175674    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322142    1392 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.175674    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.324143    1392 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:13.175836    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: E0610 12:30:49.325228    1392 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:13.175836    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:13.175923    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:13.175923    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:13.175923    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:13.175994    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:13.176022    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078361    1448 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:13.176022    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078445    1448 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078696    1448 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: E0610 12:30:50.078819    1448 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:53 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021338    1528 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021853    1528 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.022286    1528 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.024650    1528 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.040752    1528 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.082883    1528 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.083180    1528 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085143    1528 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085256    1528 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-813300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.086924    1528 topology_manager.go:138] "Creating topology manager with none policy"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.087122    1528 container_manager_linux.go:301] "Creating device plugin manager"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.088486    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.090915    1528 kubelet.go:400] "Attempting to sync node with API server"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091108    1528 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091402    1528 kubelet.go:312] "Adding apiserver pod source"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.092259    1528 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.097253    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.097520    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.099693    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.099740    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.176595    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.099843    1528 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0610 12:32:13.176595    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.102710    1528 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0610 12:32:13.176641    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.103981    1528 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0610 12:32:13.176641    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.107194    1528 server.go:1264] "Started kubelet"
	I0610 12:32:13.176641    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.120692    1528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0610 12:32:13.176917    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.122088    1528 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0610 12:32:13.176979    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.125028    1528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.128857    1528 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.132449    1528 server.go:455] "Adding debug handlers to kubelet server"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.124281    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.137444    1528 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.139221    1528 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.141909    1528 factory.go:221] Registration of the systemd container factory successfully
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147241    1528 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147375    1528 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.144942    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="200ms"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.143108    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.154145    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.179909    1528 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180022    1528 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180086    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181162    1528 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181233    1528 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181261    1528 policy_none.go:49] "None policy: Start"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.192385    1528 reconciler.go:26] "Reconciler: start to sync state"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193179    1528 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193256    1528 state_mem.go:35] "Initializing new in-memory state store"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193830    1528 state_mem.go:75] "Updated machine memory state"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.197194    1528 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.204265    1528 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.219894    1528 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0610 12:32:13.177562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.226098    1528 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-813300\" not found"
	I0610 12:32:13.177562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.226649    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0610 12:32:13.177608    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.230123    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0610 12:32:13.177608    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231021    1528 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0610 12:32:13.177608    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231133    1528 kubelet.go:2337] "Starting kubelet main sync loop"
	I0610 12:32:13.177608    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.231189    1528 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0610 12:32:13.177608    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.244084    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:13.177712    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.247037    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.177712    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.247227    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.177712    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.253607    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.255809    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334683    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62db1c721951a36c62a6369a30c651a661eb2871f8363fa341ef8ad7b7080a07"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334742    1528 topology_manager.go:215] "Topology Admit Handler" podUID="180cf4cc399d604c28cc4df1442ebd5a" podNamespace="kube-system" podName="kube-apiserver-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.336338    1528 topology_manager.go:215] "Topology Admit Handler" podUID="37865ce1914dc04a4a0a25e98b80ce35" podNamespace="kube-system" podName="kube-controller-manager-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.338106    1528 topology_manager.go:215] "Topology Admit Handler" podUID="4d9c84710aef19c4449f4b7691d0af07" podNamespace="kube-system" podName="kube-scheduler-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340794    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7d28a97ba1c48cbe8edd3eab76f64cdcdebf920a03921644f63d12856b642f0"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340848    1528 topology_manager.go:215] "Topology Admit Handler" podUID="76e8893277ba7cea6624561880496e47" podNamespace="kube-system" podName="etcd-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.341927    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04d7b3d4fcc648cd6b447a383defba86200f1071acc892670457ebeebb52f22"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.342208    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bc6043f7b92f091f4ceee7db3e11617072391c6e5303f4ecdafdb06d4b585a"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.356667    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="400ms"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.365771    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.380268    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b6aa9a0e1d1cbcee858808fc74f396cfba20777f2316093484920397e9b4ca"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397846    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-ca-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397877    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:13.178367    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397922    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-flexvolume-dir\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:13.178443    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397961    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-k8s-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:13.178540    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397979    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-kubeconfig\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:13.178582    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398000    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-data\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:13.178582    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398019    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-k8s-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:13.178661    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398038    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-ca-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:13.178744    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398055    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d9c84710aef19c4449f4b7691d0af07-kubeconfig\") pod \"kube-scheduler-multinode-813300\" (UID: \"4d9c84710aef19c4449f4b7691d0af07\") " pod="kube-system/kube-scheduler-multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398073    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-certs\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.400870    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.416196    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689b8976cc0293bf6ae2ffaf7abbe0a59cfa7521907fd652e86da3912515d25d"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.442360    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10e49596de5e51f9986bebf2105f07084a083e5e8c2ab50684531210b032662"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.454932    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.456598    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.759421    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="800ms"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.858477    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.859580    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.205231    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.205310    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.248476    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.249836    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.406658    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.406731    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.487592    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.561164    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="1.6s"
	I0610 12:32:13.179301    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.661352    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:13.179301    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.663943    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:13.179349    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.751130    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.179422    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.751205    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:56 multinode-813300 kubelet[1528]: E0610 12:30:56.215699    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:57 multinode-813300 kubelet[1528]: I0610 12:30:57.265569    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636898    1528 kubelet_node_status.go:112] "Node was previously registered" node="multinode-813300"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636993    1528 kubelet_node_status.go:76] "Successfully registered node" node="multinode-813300"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.638685    1528 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639257    1528 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639985    1528 setters.go:580] "Node became not ready" node="multinode-813300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-10T12:30:59Z","lastTransitionTime":"2024-06-10T12:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.103240    1528 apiserver.go:52] "Watching apiserver"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109200    1528 topology_manager.go:215] "Topology Admit Handler" podUID="40bf0aff-00b2-40c7-bed7-52b8cadbc3a1" podNamespace="kube-system" podName="kube-proxy-nrpvt"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109472    1528 topology_manager.go:215] "Topology Admit Handler" podUID="aad8124e-6c05-4719-9adb-edc11b3cce42" podNamespace="kube-system" podName="kindnet-29gbv"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109721    1528 topology_manager.go:215] "Topology Admit Handler" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kbhvv"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109954    1528 topology_manager.go:215] "Topology Admit Handler" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e" podNamespace="kube-system" podName="storage-provisioner"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110077    1528 topology_manager.go:215] "Topology Admit Handler" podUID="3191c71a-8c87-4390-8232-8653f494d1f0" podNamespace="default" podName="busybox-fc5497c4f-z28tq"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.110308    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110641    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-813300" podUID="f824b391-b3d2-49ec-ba7d-863cb2150f81"
	I0610 12:32:13.180016    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.111896    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:13.180016    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.115871    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.180086    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.147565    1528 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.155423    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160314    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6dfedc3-d6ff-412c-8a13-40a493c4199e-tmp\") pod \"storage-provisioner\" (UID: \"f6dfedc3-d6ff-412c-8a13-40a493c4199e\") " pod="kube-system/storage-provisioner"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160428    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-cni-cfg\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-xtables-lock\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161224    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-xtables-lock\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161359    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-lib-modules\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162089    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162182    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.662151031 +0000 UTC m=+6.753273950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.162238    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-lib-modules\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.175000    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-813300"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.186991    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187290    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.687498638 +0000 UTC m=+6.778621457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.246331    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f80d01e953cc664fc05c397fdad000" path="/var/lib/kubelet/pods/93f80d01e953cc664fc05c397fdad000/volumes"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.248399    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa7bd9cfb361baaed8d7d5729a6c77c" path="/var/lib/kubelet/pods/baa7bd9cfb361baaed8d7d5729a6c77c/volumes"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.316426    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-813300" podStartSLOduration=0.316407314 podStartE2EDuration="316.407314ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.316147208 +0000 UTC m=+6.407270027" watchObservedRunningTime="2024-06-10 12:31:00.316407314 +0000 UTC m=+6.407530233"
	I0610 12:32:13.180723    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.439081    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-813300" podStartSLOduration=0.439018164 podStartE2EDuration="439.018164ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.409703778 +0000 UTC m=+6.500826597" watchObservedRunningTime="2024-06-10 12:31:00.439018164 +0000 UTC m=+6.530141083"
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.631684    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667882    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667966    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.667947638 +0000 UTC m=+7.759070557 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769226    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769334    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769428    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.769408565 +0000 UTC m=+7.860531384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.231939    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.679952    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.680142    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.680120563 +0000 UTC m=+9.771243482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.781772    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782050    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181342    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782132    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.7821123 +0000 UTC m=+9.873235219 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181342    8536 command_runner.go:130] > Jun 10 12:31:02 multinode-813300 kubelet[1528]: E0610 12:31:02.234039    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181342    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.232296    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181466    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.701884    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.181496    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.702058    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.702037863 +0000 UTC m=+13.793160782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.181561    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802160    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181586    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802233    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181635    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802292    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.802272966 +0000 UTC m=+13.893395785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181700    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.207349    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.238069    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:05 multinode-813300 kubelet[1528]: E0610 12:31:05.232753    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:06 multinode-813300 kubelet[1528]: E0610 12:31:06.233804    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.231988    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736592    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736825    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.736801176 +0000 UTC m=+21.827923995 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837037    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837146    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837219    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.837199504 +0000 UTC m=+21.928322423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:08 multinode-813300 kubelet[1528]: E0610 12:31:08.232310    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.208416    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.231620    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:10 multinode-813300 kubelet[1528]: E0610 12:31:10.233882    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:11 multinode-813300 kubelet[1528]: E0610 12:31:11.232126    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:12 multinode-813300 kubelet[1528]: E0610 12:31:12.233695    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:13 multinode-813300 kubelet[1528]: E0610 12:31:13.231660    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.210433    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.182312    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.234870    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182312    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.232790    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182403    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816637    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.182447    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816990    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.816931565 +0000 UTC m=+37.908054384 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.182475    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918429    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.182475    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918619    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918694    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.918675278 +0000 UTC m=+38.009798097 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:16 multinode-813300 kubelet[1528]: E0610 12:31:16.234954    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:17 multinode-813300 kubelet[1528]: E0610 12:31:17.231668    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:18 multinode-813300 kubelet[1528]: E0610 12:31:18.232656    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.214153    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.231639    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:20 multinode-813300 kubelet[1528]: E0610 12:31:20.234429    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:21 multinode-813300 kubelet[1528]: E0610 12:31:21.232080    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:22 multinode-813300 kubelet[1528]: E0610 12:31:22.232638    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:23 multinode-813300 kubelet[1528]: E0610 12:31:23.233105    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.216593    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.233280    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:25 multinode-813300 kubelet[1528]: E0610 12:31:25.232513    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:26 multinode-813300 kubelet[1528]: E0610 12:31:26.232337    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.183087    8536 command_runner.go:130] > Jun 10 12:31:27 multinode-813300 kubelet[1528]: E0610 12:31:27.233152    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.183087    8536 command_runner.go:130] > Jun 10 12:31:28 multinode-813300 kubelet[1528]: E0610 12:31:28.234103    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.183193    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.218816    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.232070    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:30 multinode-813300 kubelet[1528]: E0610 12:31:30.231766    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.231673    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884791    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884975    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.884956587 +0000 UTC m=+69.976079506 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985181    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985216    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.98525598 +0000 UTC m=+70.076378799 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.232018    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.476305    1528 scope.go:117] "RemoveContainer" containerID="d32ce22e31b06bacb7530f3513c1f864d77685269868404ad7c71a4f15d91e41"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.477175    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.477659    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f6dfedc3-d6ff-412c-8a13-40a493c4199e)\"" pod="kube-system/storage-provisioner" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:33 multinode-813300 kubelet[1528]: E0610 12:31:33.232631    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 kubelet[1528]: I0610 12:31:47.231895    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.214930    1528 scope.go:117] "RemoveContainer" containerID="34b9299d74e382eadb8e7df1029506efc87e283ac8b38024d9524b8bb815f705"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: E0610 12:31:54.266189    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:13.183951    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:13.183951    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:13.183951    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.275663    1528 scope.go:117] "RemoveContainer" containerID="ba52603f8387590319a4d5a9511265065e2f90bff6628bec2f622754e034c70a"
	I0610 12:32:13.230405    8536 logs.go:123] Gathering logs for coredns [24f3f7e041f9] ...
	I0610 12:32:13.230405    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f3f7e041f9"
	I0610 12:32:13.269497    8536 command_runner.go:130] > .:53
	I0610 12:32:13.269497    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:13.269497    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:13.269497    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:13.269497    8536 command_runner.go:130] > [INFO] 127.0.0.1:34387 - 41508 "HINFO IN 7171992165040069679.5605173313288368349. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051230172s
	I0610 12:32:13.269497    8536 logs.go:123] Gathering logs for kube-controller-manager [3bee53d5fef9] ...
	I0610 12:32:13.269497    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bee53d5fef9"
	I0610 12:32:13.308290    8536 command_runner.go:130] ! I0610 12:30:56.976566       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:13.308360    8536 command_runner.go:130] ! I0610 12:30:58.260708       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:13.308360    8536 command_runner.go:130] ! I0610 12:30:58.260892       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.308360    8536 command_runner.go:130] ! I0610 12:30:58.266101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:13.308426    8536 command_runner.go:130] ! I0610 12:30:58.267393       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:13.308426    8536 command_runner.go:130] ! I0610 12:30:58.268203       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:13.308426    8536 command_runner.go:130] ! I0610 12:30:58.268377       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:13.308485    8536 command_runner.go:130] ! I0610 12:31:01.430160       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:13.308524    8536 command_runner.go:130] ! I0610 12:31:01.430459       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:13.308571    8536 command_runner.go:130] ! I0610 12:31:01.456745       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:13.308571    8536 command_runner.go:130] ! I0610 12:31:01.457409       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:13.308621    8536 command_runner.go:130] ! I0610 12:31:01.457489       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:13.308648    8536 command_runner.go:130] ! I0610 12:31:01.457839       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:13.308648    8536 command_runner.go:130] ! I0610 12:31:01.509226       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:13.308648    8536 command_runner.go:130] ! I0610 12:31:01.512712       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:13.308648    8536 command_runner.go:130] ! I0610 12:31:01.512947       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:13.308709    8536 command_runner.go:130] ! I0610 12:31:01.517463       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:13.308735    8536 command_runner.go:130] ! I0610 12:31:01.520424       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:13.308735    8536 command_runner.go:130] ! I0610 12:31:01.528150       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:13.308735    8536 command_runner.go:130] ! I0610 12:31:01.528371       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:13.308735    8536 command_runner.go:130] ! I0610 12:31:01.528506       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:13.308801    8536 command_runner.go:130] ! I0610 12:31:01.528651       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:13.308801    8536 command_runner.go:130] ! I0610 12:31:01.533407       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:13.308865    8536 command_runner.go:130] ! I0610 12:31:01.543133       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:13.308890    8536 command_runner.go:130] ! I0610 12:31:01.548293       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:13.308914    8536 command_runner.go:130] ! I0610 12:31:01.548310       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:13.308971    8536 command_runner.go:130] ! I0610 12:31:01.548473       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:13.308993    8536 command_runner.go:130] ! I0610 12:31:01.548492       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:13.308993    8536 command_runner.go:130] ! I0610 12:31:01.548660       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:13.309032    8536 command_runner.go:130] ! I0610 12:31:01.548672       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:13.309032    8536 command_runner.go:130] ! I0610 12:31:01.595194       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:13.309080    8536 command_runner.go:130] ! I0610 12:31:01.595266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:13.309100    8536 command_runner.go:130] ! I0610 12:31:01.595295       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:13.309100    8536 command_runner.go:130] ! I0610 12:31:01.595320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:13.309100    8536 command_runner.go:130] ! I0610 12:31:01.595340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:13.309100    8536 command_runner.go:130] ! I0610 12:31:01.595360       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:13.309203    8536 command_runner.go:130] ! I0610 12:31:01.595381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:13.309230    8536 command_runner.go:130] ! I0610 12:31:01.595402       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595465       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! W0610 12:31:01.595507       1 shared_informer.go:597] resyncPeriod 13h16m37.278540311s is smaller than resyncCheckPeriod 16h53m16.378760609s and the informer has already started. Changing it to 16h53m16.378760609s
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595706       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595754       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595782       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595923       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595956       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597416       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597453       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597489       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597516       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597922       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597937       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.598081       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.614277       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.614469       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.614504       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.618176       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:13.309780    8536 command_runner.go:130] ! I0610 12:31:01.618586       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:13.309825    8536 command_runner.go:130] ! I0610 12:31:01.618885       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:13.309825    8536 command_runner.go:130] ! I0610 12:31:01.623374       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:13.309875    8536 command_runner.go:130] ! I0610 12:31:01.624235       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:13.309875    8536 command_runner.go:130] ! I0610 12:31:01.624265       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:13.309875    8536 command_runner.go:130] ! I0610 12:31:01.629921       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:13.309875    8536 command_runner.go:130] ! I0610 12:31:01.630154       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:13.309875    8536 command_runner.go:130] ! I0610 12:31:01.630164       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:13.309941    8536 command_runner.go:130] ! I0610 12:31:01.634130       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:13.309958    8536 command_runner.go:130] ! I0610 12:31:01.634452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:13.309958    8536 command_runner.go:130] ! I0610 12:31:01.634467       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:13.309958    8536 command_runner.go:130] ! I0610 12:31:01.639133       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.639154       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.639163       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.639622       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.639640       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.643940       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.644017       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.644031       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.652714       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.657163       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.657350       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:13.310020    8536 command_runner.go:130] ! E0610 12:31:01.664322       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.664388       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.694061       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.694262       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.694273       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.722911       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.725806       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.726026       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.734788       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.735047       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.735083       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.759990       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.761603       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.761772       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.769963       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.773525       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.773866       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.773998       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.778762       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.778803       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.778833       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.779416       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:13.310552    8536 command_runner.go:130] ! I0610 12:31:01.779429       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:13.310552    8536 command_runner.go:130] ! I0610 12:31:01.779447       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.310595    8536 command_runner.go:130] ! I0610 12:31:01.780731       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:13.310595    8536 command_runner.go:130] ! I0610 12:31:01.782261       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:13.310595    8536 command_runner.go:130] ! I0610 12:31:01.783730       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:13.310654    8536 command_runner.go:130] ! I0610 12:31:01.782277       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.310654    8536 command_runner.go:130] ! I0610 12:31:01.782337       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:13.310654    8536 command_runner.go:130] ! I0610 12:31:01.784928       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:13.310749    8536 command_runner.go:130] ! I0610 12:31:01.782348       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.310749    8536 command_runner.go:130] ! I0610 12:31:11.813253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:13.310749    8536 command_runner.go:130] ! I0610 12:31:11.813374       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:13.310795    8536 command_runner.go:130] ! I0610 12:31:11.813998       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:13.310821    8536 command_runner.go:130] ! I0610 12:31:11.815397       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:13.310821    8536 command_runner.go:130] ! I0610 12:31:11.818405       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:13.310821    8536 command_runner.go:130] ! I0610 12:31:11.818514       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:13.310821    8536 command_runner.go:130] ! I0610 12:31:11.819007       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:13.310889    8536 command_runner.go:130] ! I0610 12:31:11.819350       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:13.310889    8536 command_runner.go:130] ! I0610 12:31:11.821748       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:13.310951    8536 command_runner.go:130] ! I0610 12:31:11.821802       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:13.310951    8536 command_runner.go:130] ! I0610 12:31:11.822113       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:13.310951    8536 command_runner.go:130] ! I0610 12:31:11.822204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:13.311047    8536 command_runner.go:130] ! I0610 12:31:11.822232       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:13.311047    8536 command_runner.go:130] ! I0610 12:31:11.826332       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:13.311047    8536 command_runner.go:130] ! I0610 12:31:11.826815       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:13.311047    8536 command_runner.go:130] ! I0610 12:31:11.826831       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:13.311099    8536 command_runner.go:130] ! E0610 12:31:11.830024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:13.311099    8536 command_runner.go:130] ! I0610 12:31:11.830417       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:13.311143    8536 command_runner.go:130] ! I0610 12:31:11.835752       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:13.311143    8536 command_runner.go:130] ! I0610 12:31:11.836296       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:13.311143    8536 command_runner.go:130] ! I0610 12:31:11.836330       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:13.311143    8536 command_runner.go:130] ! I0610 12:31:11.839311       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:13.311213    8536 command_runner.go:130] ! I0610 12:31:11.839512       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:13.311213    8536 command_runner.go:130] ! I0610 12:31:11.839590       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:13.311213    8536 command_runner.go:130] ! I0610 12:31:11.842028       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:13.311278    8536 command_runner.go:130] ! I0610 12:31:11.842220       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:13.311278    8536 command_runner.go:130] ! I0610 12:31:11.842603       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:13.311302    8536 command_runner.go:130] ! I0610 12:31:11.842639       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.845940       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.846359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.846982       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.849897       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.850381       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.850613       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.853131       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.853418       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.853675       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.856318       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.856441       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.856643       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.856381       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.902405       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.903166       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.906707       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.907117       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.907152       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.910144       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.910388       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.910498       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.913998       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.914276       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.915779       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.916916       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.917975       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.918292       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.930523       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.947621       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.948394       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.948768       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.954911       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:13.311858    8536 command_runner.go:130] ! I0610 12:31:11.957486       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:13.311858    8536 command_runner.go:130] ! I0610 12:31:11.963420       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:13.311858    8536 command_runner.go:130] ! I0610 12:31:11.973610       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:13.311906    8536 command_runner.go:130] ! I0610 12:31:11.979167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:13.311906    8536 command_runner.go:130] ! I0610 12:31:11.980674       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:11.984963       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:11.985188       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:11.994612       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.003389       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.007898       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.011185       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.013303       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.014815       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.016632       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.016812       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.016947       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.017245       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.017927       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.018270       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.019668       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.019818       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.023667       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.024171       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.025888       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.026414       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.026742       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.026899       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.031613       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.035671       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.038980       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:13.312491    8536 command_runner.go:130] ! I0610 12:31:12.040498       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.044612       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.044983       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.048630       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.048809       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.050934       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.051748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.77596ms"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.058669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.911µs"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.061957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.647762ms"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.062771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="326.05µs"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.074892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.074973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.075004       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.075594       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.130853       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.140823       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.147492       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.174418       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.201305       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.218626       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.243193       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.658052       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.658432       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.674720       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:42.085794       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:32:06.626500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.481917ms"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:32:06.626834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.891µs"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:32:06.653330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="217.376µs"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:32:06.704393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.856077ms"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:32:06.705453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.995µs"
	I0610 12:32:15.836528    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:32:15.843887    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 200:
	ok
	I0610 12:32:15.843887    8536 round_trippers.go:463] GET https://172.17.150.144:8443/version
	I0610 12:32:15.843887    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:15.843887    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:15.843887    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:15.845987    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:15.845987    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:15.847028    8536 round_trippers.go:580]     Audit-Id: ab9a397a-32bc-4417-a374-81802ca7effc
	I0610 12:32:15.847028    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:15.847028    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:15.847028    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:15.847028    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:15.847028    8536 round_trippers.go:580]     Content-Length: 263
	I0610 12:32:15.847028    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:15 GMT
	I0610 12:32:15.847028    8536 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 12:32:15.847182    8536 api_server.go:141] control plane version: v1.30.1
	I0610 12:32:15.847231    8536 api_server.go:131] duration metric: took 3.846962s to wait for apiserver health ...
	I0610 12:32:15.847275    8536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:32:15.858925    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 12:32:15.885859    8536 command_runner.go:130] > d7941126134f
	I0610 12:32:15.885859    8536 logs.go:276] 1 containers: [d7941126134f]
	I0610 12:32:15.901183    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 12:32:15.932778    8536 command_runner.go:130] > 877ee07c1499
	I0610 12:32:15.934782    8536 logs.go:276] 1 containers: [877ee07c1499]
	I0610 12:32:15.944487    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 12:32:15.969000    8536 command_runner.go:130] > 24f3f7e041f9
	I0610 12:32:15.970039    8536 command_runner.go:130] > f2e39052db19
	I0610 12:32:15.970075    8536 logs.go:276] 2 containers: [24f3f7e041f9 f2e39052db19]
	I0610 12:32:15.979387    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 12:32:16.008128    8536 command_runner.go:130] > d90e72ef4670
	I0610 12:32:16.009000    8536 command_runner.go:130] > bd1a6cd98743
	I0610 12:32:16.009000    8536 logs.go:276] 2 containers: [d90e72ef4670 bd1a6cd98743]
	I0610 12:32:16.018371    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 12:32:16.040409    8536 command_runner.go:130] > 1de5fa0ef838
	I0610 12:32:16.040409    8536 command_runner.go:130] > afad8b05897e
	I0610 12:32:16.042402    8536 logs.go:276] 2 containers: [1de5fa0ef838 afad8b05897e]
	I0610 12:32:16.052349    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 12:32:16.075451    8536 command_runner.go:130] > 3bee53d5fef9
	I0610 12:32:16.075451    8536 command_runner.go:130] > f1409bf44ff1
	I0610 12:32:16.076672    8536 logs.go:276] 2 containers: [3bee53d5fef9 f1409bf44ff1]
	I0610 12:32:16.086234    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 12:32:16.109689    8536 command_runner.go:130] > c3c4316beca6
	I0610 12:32:16.109689    8536 command_runner.go:130] > c39d54960e7d
	I0610 12:32:16.109689    8536 logs.go:276] 2 containers: [c3c4316beca6 c39d54960e7d]
	I0610 12:32:16.109689    8536 logs.go:123] Gathering logs for kube-scheduler [bd1a6cd98743] ...
	I0610 12:32:16.109689    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1a6cd98743"
	I0610 12:32:16.139341    8536 command_runner.go:130] ! I0610 12:07:55.711360       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:16.139341    8536 command_runner.go:130] ! W0610 12:07:57.417322       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:16.140127    8536 command_runner.go:130] ! W0610 12:07:57.417963       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.140274    8536 command_runner.go:130] ! W0610 12:07:57.418046       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:16.140274    8536 command_runner.go:130] ! W0610 12:07:57.418071       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:16.140329    8536 command_runner.go:130] ! I0610 12:07:57.459055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:16.140329    8536 command_runner.go:130] ! I0610 12:07:57.460659       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.140329    8536 command_runner.go:130] ! I0610 12:07:57.464904       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:16.140397    8536 command_runner.go:130] ! I0610 12:07:57.464952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:16.140397    8536 command_runner.go:130] ! I0610 12:07:57.466483       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:16.140397    8536 command_runner.go:130] ! I0610 12:07:57.466650       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:16.140472    8536 command_runner.go:130] ! W0610 12:07:57.502453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:16.140535    8536 command_runner.go:130] ! E0610 12:07:57.507264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:16.140573    8536 command_runner.go:130] ! W0610 12:07:57.503672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140606    8536 command_runner.go:130] ! W0610 12:07:57.506076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:16.140606    8536 command_runner.go:130] ! W0610 12:07:57.506243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140606    8536 command_runner.go:130] ! W0610 12:07:57.506320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:16.140606    8536 command_runner.go:130] ! W0610 12:07:57.506362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140606    8536 command_runner.go:130] ! W0610 12:07:57.506402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! W0610 12:07:57.506651       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.140777    8536 command_runner.go:130] ! W0610 12:07:57.506722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! W0610 12:07:57.507113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! W0610 12:07:57.507193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.511548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.511795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.512240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.512647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.515128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.515218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.515698       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.516017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.516332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.516529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:16.141317    8536 command_runner.go:130] ! W0610 12:07:57.537276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:16.141363    8536 command_runner.go:130] ! E0610 12:07:57.537491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:16.141462    8536 command_runner.go:130] ! W0610 12:07:57.537680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:16.141462    8536 command_runner.go:130] ! E0610 12:07:57.538611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:16.141555    8536 command_runner.go:130] ! W0610 12:07:57.537622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:16.141555    8536 command_runner.go:130] ! E0610 12:07:57.538734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:16.141618    8536 command_runner.go:130] ! W0610 12:07:57.538013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:16.141681    8536 command_runner.go:130] ! E0610 12:07:57.539237       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:16.141723    8536 command_runner.go:130] ! W0610 12:07:58.345815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:16.141769    8536 command_runner.go:130] ! E0610 12:07:58.345914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:16.141811    8536 command_runner.go:130] ! W0610 12:07:58.356843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:16.141811    8536 command_runner.go:130] ! E0610 12:07:58.357045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:16.141883    8536 command_runner.go:130] ! W0610 12:07:58.406587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.141883    8536 command_runner.go:130] ! E0610 12:07:58.406863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.141951    8536 command_runner.go:130] ! W0610 12:07:58.426795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142009    8536 command_runner.go:130] ! E0610 12:07:58.427119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142038    8536 command_runner.go:130] ! W0610 12:07:58.503514       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.142038    8536 command_runner.go:130] ! E0610 12:07:58.503568       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.142098    8536 command_runner.go:130] ! W0610 12:07:58.610877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:16.142158    8536 command_runner.go:130] ! E0610 12:07:58.611650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:16.142192    8536 command_runner.go:130] ! W0610 12:07:58.611603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:16.142223    8536 command_runner.go:130] ! E0610 12:07:58.612141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:16.142283    8536 command_runner.go:130] ! W0610 12:07:58.614694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:16.142314    8536 command_runner.go:130] ! E0610 12:07:58.614992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.752570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.752635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.810605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.810721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.815170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.815852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.816493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.816687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.834947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.836145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.838693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.838938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.897162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.897200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! I0610 12:08:01.565495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:28:16.298586       1 run.go:74] "command failed" err="finished without leader elect"
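The burst of "forbidden" list/watch warnings above (12:07:57–12:07:58) is the usual startup race: the scheduler's informers begin listing resources before the apiserver's RBAC authorization has fully propagated. In this capture the warnings stop after 12:07:58, and the next entry (12:08:01) shows caches syncing normally. The final line shows this scheduler instance exiting at 12:28:16 with "finished without leader elect", which typically means the leader-election lease was lost or the apiserver became unreachable; that is why a fresh container log is gathered next. If the forbidden warnings were to persist rather than clear, the scheduler's RBAC could be checked from the host. A minimal diagnostic sketch, assuming the `multinode-813300` profile/context name that appears in these logs:

    # Inspect the bootstrap binding the "system:kube-scheduler" user relies on,
    # then ask the apiserver directly whether one of the denied verbs is allowed.
    kubectl --context multinode-813300 get clusterrolebinding system:kube-scheduler -o wide
    kubectl --context multinode-813300 auth can-i list pods --as=system:kube-scheduler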
	I0610 12:32:16.159373    8536 logs.go:123] Gathering logs for kube-controller-manager [3bee53d5fef9] ...
	I0610 12:32:16.159373    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bee53d5fef9"
	I0610 12:32:16.189686    8536 command_runner.go:130] ! I0610 12:30:56.976566       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:16.190065    8536 command_runner.go:130] ! I0610 12:30:58.260708       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:30:58.260892       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:30:58.266101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:30:58.267393       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:30:58.268203       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:30:58.268377       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.430160       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.430459       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.456745       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.457409       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.457489       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.457839       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.509226       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.512712       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.512947       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.517463       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.520424       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.528150       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.528371       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.528506       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.528651       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.533407       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.543133       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548293       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548310       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548473       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548492       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548660       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548672       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:16.190716    8536 command_runner.go:130] ! I0610 12:31:01.595194       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595295       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595360       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595402       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595465       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! W0610 12:31:01.595507       1 shared_informer.go:597] resyncPeriod 13h16m37.278540311s is smaller than resyncCheckPeriod 16h53m16.378760609s and the informer has already started. Changing it to 16h53m16.378760609s
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595706       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595754       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595782       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595923       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:16.191305    8536 command_runner.go:130] ! I0610 12:31:01.595956       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:16.191305    8536 command_runner.go:130] ! I0610 12:31:01.597357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:16.191360    8536 command_runner.go:130] ! I0610 12:31:01.597416       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:16.191360    8536 command_runner.go:130] ! I0610 12:31:01.597453       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:16.191360    8536 command_runner.go:130] ! I0610 12:31:01.597489       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:16.191416    8536 command_runner.go:130] ! I0610 12:31:01.597516       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:16.191416    8536 command_runner.go:130] ! I0610 12:31:01.597922       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:16.191452    8536 command_runner.go:130] ! I0610 12:31:01.597937       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:16.191452    8536 command_runner.go:130] ! I0610 12:31:01.598081       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:16.191529    8536 command_runner.go:130] ! I0610 12:31:01.614277       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:16.191567    8536 command_runner.go:130] ! I0610 12:31:01.614469       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:16.191567    8536 command_runner.go:130] ! I0610 12:31:01.614504       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:16.191607    8536 command_runner.go:130] ! I0610 12:31:01.618176       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.618586       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.618885       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.623374       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.624235       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.624265       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.629921       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.630154       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.630164       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.634130       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.634452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.634467       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.639133       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.639154       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.639163       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.639622       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.639640       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.643940       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.644017       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.644031       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.652714       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.657163       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.657350       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:16.191669    8536 command_runner.go:130] ! E0610 12:31:01.664322       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.664388       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.694061       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.694262       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.694273       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.722911       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.725806       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.726026       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.734788       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.735047       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.735083       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:16.192201    8536 command_runner.go:130] ! I0610 12:31:01.759990       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:16.192201    8536 command_runner.go:130] ! I0610 12:31:01.761603       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:16.192242    8536 command_runner.go:130] ! I0610 12:31:01.761772       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:16.192242    8536 command_runner.go:130] ! I0610 12:31:01.769963       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:16.192242    8536 command_runner.go:130] ! I0610 12:31:01.773525       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:16.192309    8536 command_runner.go:130] ! I0610 12:31:01.773866       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:16.192309    8536 command_runner.go:130] ! I0610 12:31:01.773998       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:16.192342    8536 command_runner.go:130] ! I0610 12:31:01.778762       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:16.192342    8536 command_runner.go:130] ! I0610 12:31:01.778803       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:16.192342    8536 command_runner.go:130] ! I0610 12:31:01.778833       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.192342    8536 command_runner.go:130] ! I0610 12:31:01.779416       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:16.192460    8536 command_runner.go:130] ! I0610 12:31:01.779429       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:16.192460    8536 command_runner.go:130] ! I0610 12:31:01.779447       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.192460    8536 command_runner.go:130] ! I0610 12:31:01.780731       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:16.192507    8536 command_runner.go:130] ! I0610 12:31:01.782261       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:16.192507    8536 command_runner.go:130] ! I0610 12:31:01.783730       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:16.192557    8536 command_runner.go:130] ! I0610 12:31:01.782277       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.192557    8536 command_runner.go:130] ! I0610 12:31:01.782337       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:16.192639    8536 command_runner.go:130] ! I0610 12:31:01.784928       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:16.192639    8536 command_runner.go:130] ! I0610 12:31:01.782348       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.192711    8536 command_runner.go:130] ! I0610 12:31:11.813253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:16.192711    8536 command_runner.go:130] ! I0610 12:31:11.813374       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.813998       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.815397       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.818405       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.818514       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.819007       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.819350       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.821748       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.821802       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.822113       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.822204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.822232       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.826332       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.826815       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.826831       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:16.192753    8536 command_runner.go:130] ! E0610 12:31:11.830024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.830417       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.835752       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.836296       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.836330       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.839311       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.839512       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.839590       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.842028       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.842220       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.842603       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.842639       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.845940       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.846359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.846982       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.849897       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.850381       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.850613       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.853131       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:16.193275    8536 command_runner.go:130] ! I0610 12:31:11.853418       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:16.193275    8536 command_runner.go:130] ! I0610 12:31:11.853675       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:16.193336    8536 command_runner.go:130] ! I0610 12:31:11.856318       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:16.193336    8536 command_runner.go:130] ! I0610 12:31:11.856441       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:16.193336    8536 command_runner.go:130] ! I0610 12:31:11.856643       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:16.193384    8536 command_runner.go:130] ! I0610 12:31:11.856381       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:16.193451    8536 command_runner.go:130] ! I0610 12:31:11.902405       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:16.193451    8536 command_runner.go:130] ! I0610 12:31:11.903166       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:16.193451    8536 command_runner.go:130] ! I0610 12:31:11.906707       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:16.193504    8536 command_runner.go:130] ! I0610 12:31:11.907117       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:16.193504    8536 command_runner.go:130] ! I0610 12:31:11.907152       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:16.193548    8536 command_runner.go:130] ! I0610 12:31:11.910144       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:16.193548    8536 command_runner.go:130] ! I0610 12:31:11.910388       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:16.193592    8536 command_runner.go:130] ! I0610 12:31:11.910498       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:16.193592    8536 command_runner.go:130] ! I0610 12:31:11.913998       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:16.193633    8536 command_runner.go:130] ! I0610 12:31:11.914276       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:16.193633    8536 command_runner.go:130] ! I0610 12:31:11.915779       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:16.193685    8536 command_runner.go:130] ! I0610 12:31:11.916916       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:16.193685    8536 command_runner.go:130] ! I0610 12:31:11.917975       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.918292       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.930523       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.947621       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.948394       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.948768       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.954911       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.957486       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.963420       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.973610       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.979167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.980674       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.984963       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.985188       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.994612       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.003389       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.007898       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.011185       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.013303       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.014815       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.016632       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.016812       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.016947       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.017245       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.017927       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.018270       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.019668       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.019818       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.023667       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.024171       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.025888       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.026414       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.026742       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.026899       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.031613       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.035671       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.038980       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.040498       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:16.194256    8536 command_runner.go:130] ! I0610 12:31:12.044612       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:16.194256    8536 command_runner.go:130] ! I0610 12:31:12.044983       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:16.194256    8536 command_runner.go:130] ! I0610 12:31:12.048630       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:16.194256    8536 command_runner.go:130] ! I0610 12:31:12.048809       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.050934       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.051748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.77596ms"
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.058669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.911µs"
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.061957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.647762ms"
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.062771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="326.05µs"
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.074892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:16.194397    8536 command_runner.go:130] ! I0610 12:31:12.074973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:16.194397    8536 command_runner.go:130] ! I0610 12:31:12.075004       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:16.194437    8536 command_runner.go:130] ! I0610 12:31:12.075594       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:16.194437    8536 command_runner.go:130] ! I0610 12:31:12.130853       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:16.194486    8536 command_runner.go:130] ! I0610 12:31:12.140823       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:16.194486    8536 command_runner.go:130] ! I0610 12:31:12.147492       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:16.194486    8536 command_runner.go:130] ! I0610 12:31:12.174418       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:16.194526    8536 command_runner.go:130] ! I0610 12:31:12.201305       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:16.194526    8536 command_runner.go:130] ! I0610 12:31:12.218626       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:16.194569    8536 command_runner.go:130] ! I0610 12:31:12.243193       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:16.194569    8536 command_runner.go:130] ! I0610 12:31:12.658052       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:16.194609    8536 command_runner.go:130] ! I0610 12:31:12.658432       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:16.194609    8536 command_runner.go:130] ! I0610 12:31:12.674720       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:16.194678    8536 command_runner.go:130] ! I0610 12:31:42.085794       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:16.194678    8536 command_runner.go:130] ! I0610 12:32:06.626500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.481917ms"
	I0610 12:32:16.194711    8536 command_runner.go:130] ! I0610 12:32:06.626834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.891µs"
	I0610 12:32:16.194781    8536 command_runner.go:130] ! I0610 12:32:06.653330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="217.376µs"
	I0610 12:32:16.194781    8536 command_runner.go:130] ! I0610 12:32:06.704393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.856077ms"
	I0610 12:32:16.194832    8536 command_runner.go:130] ! I0610 12:32:06.705453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.995µs"
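
The run of "Caches are synced" messages above is kube-controller-manager's startup checkpoint: each controller blocks until its shared informer caches have synced before it begins reconciling, which is why the replica-set and node-lifecycle activity only starts once those lines have flushed through. A minimal client-go sketch of that same wait pattern; the pod informer and the kubeconfig path here are illustrative assumptions, not minikube's or the controller-manager's actual code:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the local kubeconfig (path is an assumption).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
    	podInformer := factory.Core().V1().Pods().Informer()

    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)

    	// Block until the cache has synced, the equivalent of the
    	// "Caches are synced for ..." checkpoint, before reconciling anything.
    	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
    		panic("cache never synced")
    	}
    	fmt.Println("caches are synced; controller may start")
    }
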
	I0610 12:32:16.213494    8536 logs.go:123] Gathering logs for coredns [24f3f7e041f9] ...
	I0610 12:32:16.213494    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f3f7e041f9"
	I0610 12:32:16.257020    8536 command_runner.go:130] > .:53
	I0610 12:32:16.257020    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:16.257020    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:16.257020    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:16.257020    8536 command_runner.go:130] > [INFO] 127.0.0.1:34387 - 41508 "HINFO IN 7171992165040069679.5605173313288368349. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051230172s
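
Each "Gathering logs for ..." step above follows the same shape: logs.go picks a container ID and ssh_runner executes /bin/bash -c "docker logs --tail 400 <id>" inside the node, capturing whatever the container wrote. A minimal sketch of that remote step using golang.org/x/crypto/ssh; the address and container ID are taken from this log, the user and password are placeholders, and minikube's real runner manages its own authenticated session rather than this throwaway client:

    package main

    import (
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // dockerLogs runs `docker logs --tail 400 <id>` on a remote node over SSH
    // and returns the combined output, roughly what ssh_runner does above.
    func dockerLogs(addr, user, password, containerID string) ([]byte, error) {
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.Password(password)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return nil, err
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		return nil, err
    	}
    	defer session.Close()

    	cmd := fmt.Sprintf(`/bin/bash -c "docker logs --tail 400 %s"`, containerID)
    	return session.CombinedOutput(cmd)
    }

    func main() {
    	// Node IP and coredns container ID as they appear in the log above;
    	// user and password are placeholders.
    	out, err := dockerLogs("172.17.159.171:22", "docker", "changeme", "24f3f7e041f9")
    	if err != nil {
    		fmt.Println("ssh error:", err)
    		return
    	}
    	fmt.Printf("%s", out)
    }
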
	I0610 12:32:16.257020    8536 logs.go:123] Gathering logs for kindnet [c39d54960e7d] ...
	I0610 12:32:16.257020    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39d54960e7d"
	I0610 12:32:16.295302    8536 command_runner.go:130] ! I0610 12:12:45.866152       1 main.go:227] handling current node
	I0610 12:32:16.296336    8536 command_runner.go:130] ! I0610 12:12:45.866170       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.296336    8536 command_runner.go:130] ! I0610 12:12:45.866178       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297028    8536 command_runner.go:130] ! I0610 12:12:55.883210       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297171    8536 command_runner.go:130] ! I0610 12:12:55.883426       1 main.go:227] handling current node
	I0610 12:32:16.297569    8536 command_runner.go:130] ! I0610 12:12:55.883562       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:12:55.883686       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:05.893577       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:05.893734       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:05.893787       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:05.893797       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:15.902454       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:15.902590       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:15.902606       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:15.902614       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:25.917172       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:25.917277       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:25.917297       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:25.917305       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:35.933505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:35.933609       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:35.933623       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:35.933630       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:45.943963       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:45.944071       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:45.944089       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:45.944114       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:55.953212       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:55.953354       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:55.953371       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:55.953380       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:05.959968       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:05.960014       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:05.960029       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:05.960036       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:15.970279       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:15.970375       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:15.970391       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:15.970399       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298490    8536 command_runner.go:130] ! I0610 12:14:25.977769       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298490    8536 command_runner.go:130] ! I0610 12:14:25.977865       1 main.go:227] handling current node
	I0610 12:32:16.298490    8536 command_runner.go:130] ! I0610 12:14:25.977880       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298490    8536 command_runner.go:130] ! I0610 12:14:25.977886       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:35.984527       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:35.984582       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:35.984596       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:35.984604       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:46.000499       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:46.000612       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:46.000635       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:46.000650       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:56.007468       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:56.007626       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:56.007642       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:56.007651       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:06.022181       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:06.022286       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:06.022302       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:06.022312       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:16.038901       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:16.038992       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:16.039008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:16.039016       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:26.062184       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:26.062279       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:26.062296       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:26.062304       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:36.071408       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:36.071540       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:36.071556       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:36.071564       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:46.078051       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:46.078158       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:46.078176       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:46.078184       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:56.086545       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:56.086647       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:56.086663       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:56.086671       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:16:06.094871       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299195    8536 command_runner.go:130] ! I0610 12:16:06.094920       1 main.go:227] handling current node
	I0610 12:32:16.299238    8536 command_runner.go:130] ! I0610 12:16:06.094935       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:06.094958       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:16.109713       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:16.110282       1 main.go:227] handling current node
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:16.110679       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:16.110879       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:26.124392       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:26.124492       1 main.go:227] handling current node
	I0610 12:32:16.299920    8536 command_runner.go:130] ! I0610 12:16:26.124507       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:26.124514       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:36.130696       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:36.130864       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:36.130880       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:36.130888       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:46.145505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:46.145897       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:46.146067       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:46.146083       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:56.160466       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:56.160571       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:56.160586       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:56.160594       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:06.173930       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:06.173977       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:06.173992       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:06.173999       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:16.180797       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:16.180971       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:16.181005       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:16.181031       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:26.197081       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:26.197184       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:26.197201       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:26.197210       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:36.204586       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:36.204700       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:36.204716       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:36.204725       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:46.214904       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:46.215024       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:46.215040       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:46.215048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:56.228072       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:56.228173       1 main.go:227] handling current node
	I0610 12:32:16.300513    8536 command_runner.go:130] ! I0610 12:17:56.228189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300513    8536 command_runner.go:130] ! I0610 12:17:56.228197       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300513    8536 command_runner.go:130] ! I0610 12:18:06.237192       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300513    8536 command_runner.go:130] ! I0610 12:18:06.237303       1 main.go:227] handling current node
	I0610 12:32:16.300513    8536 command_runner.go:130] ! I0610 12:18:06.237329       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:06.237354       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:16.244574       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:16.244799       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:16.244837       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:16.244863       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:26.258608       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:26.258654       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:26.258669       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:26.258676       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:36.264620       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:36.264824       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:36.264841       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:36.264850       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:46.275317       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:46.275426       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:46.275460       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:46.275469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:56.290965       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:56.291027       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:56.291041       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:56.291048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:06.298370       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:06.298512       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:06.298529       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:06.298537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:16.309110       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:16.309215       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:16.309232       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:16.309240       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:26.322583       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:26.322633       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:26.322647       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:26.322654       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:36.336250       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:36.336376       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:36.336392       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:36.336400       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:46.350996       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301379    8536 command_runner.go:130] ! I0610 12:19:46.351137       1 main.go:227] handling current node
	I0610 12:32:16.301421    8536 command_runner.go:130] ! I0610 12:19:46.351155       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:19:46.351164       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:19:56.356996       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:19:56.357039       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:19:56.357052       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:19:56.357059       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:06.372114       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:06.372883       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:06.373032       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:06.373062       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:16.381023       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:16.381690       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:16.381940       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:16.381975       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:26.389178       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:26.389224       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:26.389240       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:26.389247       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:36.395687       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:36.395828       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:36.395844       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:36.395851       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:46.410656       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:46.410865       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:46.410882       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:46.410891       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:56.425296       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:56.425540       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:56.425625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:56.425639       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:06.439346       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:06.439393       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:06.439406       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:06.439413       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:16.450424       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:16.450594       1 main.go:227] handling current node
	I0610 12:32:16.302071    8536 command_runner.go:130] ! I0610 12:21:16.450628       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:16.450821       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:26.458379       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:26.458487       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:26.458503       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:26.458511       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:36.474243       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:36.474337       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:36.474354       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:36.474362       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:46.486635       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:46.486679       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:46.486693       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:46.486700       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:56.502256       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:56.502361       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:56.502377       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:56.502386       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:06.508796       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:06.508911       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:06.508928       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:06.508957       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:16.523863       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:16.523952       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:16.523970       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:16.523979       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:26.531516       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:26.531621       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:26.531637       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:26.531645       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:36.546403       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302889    8536 command_runner.go:130] ! I0610 12:22:36.546510       1 main.go:227] handling current node
	I0610 12:32:16.302889    8536 command_runner.go:130] ! I0610 12:22:36.546525       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302889    8536 command_runner.go:130] ! I0610 12:22:36.546533       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303110    8536 command_runner.go:130] ! I0610 12:22:46.603429       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:46.603565       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:46.603581       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:46.603590       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:56.619134       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:56.619253       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:56.619287       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:56.619296       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:06.634307       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:06.634399       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:06.634415       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:06.634424       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:16.649341       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:16.649508       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:16.649527       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:16.649539       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:26.662421       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:26.662451       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:26.662462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:26.662468       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:36.669686       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:36.669734       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:36.669822       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:36.669831       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:46.678078       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:46.678194       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:46.678209       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:46.678217       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:56.685841       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:56.685884       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:56.685898       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:56.685905       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:24:06.692341       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:24:06.692609       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:24:06.692699       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:24:06.692856       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:24:16.700494       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303826    8536 command_runner.go:130] ! I0610 12:24:16.700609       1 main.go:227] handling current node
	I0610 12:32:16.303871    8536 command_runner.go:130] ! I0610 12:24:16.700625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:16.700633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:26.716495       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:26.716609       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:26.716625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:26.716633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:36.723606       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:36.723716       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:36.723733       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:36.724254       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:46.739916       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:46.740008       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:46.740402       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:46.740432       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:56.759676       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:56.760848       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:56.760902       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:56.760914       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:06.771450       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:06.771514       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:06.771530       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:06.771537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:16.778338       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:16.778445       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:16.778461       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:16.778469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:26.791778       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:26.791933       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:26.791950       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:26.791974       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:36.800633       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:36.800842       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:36.800860       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:36.800869       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304679    8536 command_runner.go:130] ! I0610 12:25:46.815290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304679    8536 command_runner.go:130] ! I0610 12:25:46.815339       1 main.go:227] handling current node
	I0610 12:32:16.304781    8536 command_runner.go:130] ! I0610 12:25:46.815355       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304863    8536 command_runner.go:130] ! I0610 12:25:46.815363       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.830374       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.830439       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.830471       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.830478       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.831222       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.831411       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.831494       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.840295       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.840446       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.840464       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.840913       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.845129       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.845329       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.860365       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.860476       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.860493       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.860502       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.861223       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.861379       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.873719       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.873964       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.874016       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.874181       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.874413       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.874451       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881254       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881366       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881382       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881407       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881814       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881908       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900700       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900797       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900815       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900823       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900985       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907395       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907412       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907420       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907548       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907656       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922305       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922349       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922361       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922367       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922490       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922515       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.929579       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.929687       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.929704       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.929712       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.930550       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.930641       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.944603       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.944719       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.944772       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.945138       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.945535       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.945625       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.955188       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.955329       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.955462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.955581       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.955956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.956158       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.965590       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.965717       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.965826       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.965836       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.966598       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.966708       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:56.999276       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:56.999553       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:56.999711       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:56.999728       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:57.000088       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:57.000177       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015069       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015281       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015300       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015308       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015707       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015928       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
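The kindnet lines above repeat one shape every ~10s: walk the node list (main.go:223), note the current node (main.go:227), and record each peer's pod CIDR (main.go:250) so a route via that peer's IP can be kept in place. A minimal Go model of that reconcile loop is sketched below; the node names/IPs are taken from the logs, the current-node name is an inference, and the route step is a stand-in print (the real kindnetd talks to the API server and programs kernel routes, per the routes.go:62 lines later in this report).

package main

import (
	"fmt"
	"time"
)

type node struct {
	name    string
	ip      string // InternalIP, e.g. 172.17.151.128
	podCIDR string // e.g. 10.244.1.0/24
}

// reconcile mirrors the per-tick log pattern above: skip route programming
// for the node we run on, ensure a route to every peer's pod CIDR.
func reconcile(self string, nodes []node) {
	for _, n := range nodes {
		fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
		if n.name == self {
			fmt.Println("handling current node") // no route needed to ourselves
			continue
		}
		fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.podCIDR)
		// Stand-in for the real route install: dst=podCIDR via gw=node IP.
		fmt.Printf("ensure route: dst %s gw %s\n", n.podCIDR, n.ip)
	}
}

func main() {
	nodes := []node{
		{"multinode-813300", "172.17.159.171", "10.244.0.0/24"},
		{"multinode-813300-m02", "172.17.151.128", "10.244.1.0/24"},
		{"multinode-813300-m03", "172.17.144.46", "10.244.2.0/24"},
	}
	for range time.Tick(10 * time.Second) { // matches the ~10s cadence in the logs
		reconcile("multinode-813300", nodes)
	}
}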
	I0610 12:32:16.324960    8536 logs.go:123] Gathering logs for kube-proxy [afad8b05897e] ...
	I0610 12:32:16.324960    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 afad8b05897e"
	I0610 12:32:16.355539    8536 command_runner.go:130] ! I0610 12:08:17.787330       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:16.355728    8536 command_runner.go:130] ! I0610 12:08:17.815813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.159.171"]
	I0610 12:32:16.355728    8536 command_runner.go:130] ! I0610 12:08:17.929231       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:16.355767    8536 command_runner.go:130] ! I0610 12:08:17.929304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:16.355767    8536 command_runner.go:130] ! I0610 12:08:17.929325       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:16.355813    8536 command_runner.go:130] ! I0610 12:08:17.933115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:16.355813    8536 command_runner.go:130] ! I0610 12:08:17.933534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:16.355856    8536 command_runner.go:130] ! I0610 12:08:17.933681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.355856    8536 command_runner.go:130] ! I0610 12:08:17.935227       1 config.go:192] "Starting service config controller"
	I0610 12:32:16.355892    8536 command_runner.go:130] ! I0610 12:08:17.935260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:16.355892    8536 command_runner.go:130] ! I0610 12:08:17.935291       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:16.355957    8536 command_runner.go:130] ! I0610 12:08:17.935297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:16.356007    8536 command_runner.go:130] ! I0610 12:08:17.937731       1 config.go:319] "Starting node config controller"
	I0610 12:32:16.356007    8536 command_runner.go:130] ! I0610 12:08:17.938095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:16.356039    8536 command_runner.go:130] ! I0610 12:08:18.035433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:32:16.356039    8536 command_runner.go:130] ! I0610 12:08:18.035502       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:16.356039    8536 command_runner.go:130] ! I0610 12:08:18.038590       1 shared_informer.go:320] Caches are synced for node config
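The "Waiting for caches to sync ... Caches are synced" pairs in the kube-proxy log above are client-go's standard informer startup handshake: each config controller blocks until its initial list/watch has populated the local cache. A minimal sketch of that pattern follows, assuming only the k8s.io/client-go cache package; the synced condition here is a time-based stand-in rather than a real informer's HasSynced.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/tools/cache"
)

func main() {
	stop := make(chan struct{})
	start := time.Now()

	// Stand-in for informer.HasSynced: reports true once the initial
	// list/watch would have completed (here: after 100ms).
	synced := func() bool { return time.Since(start) > 100*time.Millisecond }

	fmt.Println("Waiting for caches to sync for service config")
	if !cache.WaitForCacheSync(stop, synced) {
		fmt.Println("timed out waiting for caches to sync")
		return
	}
	fmt.Println("Caches are synced for service config")
}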
	I0610 12:32:16.359341    8536 logs.go:123] Gathering logs for kube-apiserver [d7941126134f] ...
	I0610 12:32:16.359514    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7941126134f"
	I0610 12:32:16.384533    8536 command_runner.go:130] ! I0610 12:30:56.783636       1 options.go:221] external host was not specified, using 172.17.150.144
	I0610 12:32:16.384533    8536 command_runner.go:130] ! I0610 12:30:56.802716       1 server.go:148] Version: v1.30.1
	I0610 12:32:16.384533    8536 command_runner.go:130] ! I0610 12:30:56.802771       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.385574    8536 command_runner.go:130] ! I0610 12:30:57.206580       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 12:32:16.385574    8536 command_runner.go:130] ! I0610 12:30:57.224598       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:57.225809       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:57.226002       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:57.226365       1 instance.go:299] Using reconciler: lease
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:57.637999       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:57.638403       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.007103       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.008169       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.357732       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.553660       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.567826       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.567936       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.567947       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.569137       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.569236       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.570636       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.572063       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.572082       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.572088       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.575154       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.575194       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.576862       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.576966       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.576976       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.577920       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.578059       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.578305       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.579295       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.581807       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.581943       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.582127       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.583254       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.583359       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.583370       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.387018    8536 command_runner.go:130] ! I0610 12:30:58.594003       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.594046       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0610 12:32:16.387018    8536 command_runner.go:130] ! I0610 12:30:58.597008       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.597028       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.597047       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.387018    8536 command_runner.go:130] ! I0610 12:30:58.597658       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.597679       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387194    8536 command_runner.go:130] ! W0610 12:30:58.597686       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.387194    8536 command_runner.go:130] ! I0610 12:30:58.602889       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0610 12:32:16.387194    8536 command_runner.go:130] ! W0610 12:30:58.602907       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387194    8536 command_runner.go:130] ! W0610 12:30:58.602913       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.387256    8536 command_runner.go:130] ! I0610 12:30:58.608646       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0610 12:32:16.387256    8536 command_runner.go:130] ! I0610 12:30:58.610262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0610 12:32:16.387297    8536 command_runner.go:130] ! W0610 12:30:58.610275       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0610 12:32:16.387297    8536 command_runner.go:130] ! W0610 12:30:58.610281       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387297    8536 command_runner.go:130] ! I0610 12:30:58.619816       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0610 12:32:16.387297    8536 command_runner.go:130] ! W0610 12:30:58.619856       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0610 12:32:16.387297    8536 command_runner.go:130] ! W0610 12:30:58.619866       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0610 12:32:16.387392    8536 command_runner.go:130] ! I0610 12:30:58.627044       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0610 12:32:16.387392    8536 command_runner.go:130] ! W0610 12:30:58.627092       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387392    8536 command_runner.go:130] ! W0610 12:30:58.627296       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.387438    8536 command_runner.go:130] ! I0610 12:30:58.629017       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0610 12:32:16.387438    8536 command_runner.go:130] ! W0610 12:30:58.629067       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387438    8536 command_runner.go:130] ! I0610 12:30:58.659122       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0610 12:32:16.387438    8536 command_runner.go:130] ! W0610 12:30:58.659244       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387438    8536 command_runner.go:130] ! I0610 12:30:59.341469       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:16.387522    8536 command_runner.go:130] ! I0610 12:30:59.341814       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:16.387522    8536 command_runner.go:130] ! I0610 12:30:59.341806       1 secure_serving.go:213] Serving securely on [::]:8443
	I0610 12:32:16.387522    8536 command_runner.go:130] ! I0610 12:30:59.342486       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0610 12:32:16.387604    8536 command_runner.go:130] ! I0610 12:30:59.342867       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0610 12:32:16.387604    8536 command_runner.go:130] ! I0610 12:30:59.342901       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0610 12:32:16.387604    8536 command_runner.go:130] ! I0610 12:30:59.342987       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0610 12:32:16.387712    8536 command_runner.go:130] ! I0610 12:30:59.341865       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:16.387712    8536 command_runner.go:130] ! I0610 12:30:59.344865       1 controller.go:116] Starting legacy_token_tracking_controller
	I0610 12:32:16.387712    8536 command_runner.go:130] ! I0610 12:30:59.344899       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0610 12:32:16.387712    8536 command_runner.go:130] ! I0610 12:30:59.346737       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0610 12:32:16.387712    8536 command_runner.go:130] ! I0610 12:30:59.346910       1 available_controller.go:423] Starting AvailableConditionController
	I0610 12:32:16.387816    8536 command_runner.go:130] ! I0610 12:30:59.346960       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0610 12:32:16.387816    8536 command_runner.go:130] ! I0610 12:30:59.347078       1 aggregator.go:163] waiting for initial CRD sync...
	I0610 12:32:16.387816    8536 command_runner.go:130] ! I0610 12:30:59.347170       1 controller.go:78] Starting OpenAPI AggregationController
	I0610 12:32:16.387816    8536 command_runner.go:130] ! I0610 12:30:59.347256       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0610 12:32:16.387816    8536 command_runner.go:130] ! I0610 12:30:59.347656       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0610 12:32:16.387895    8536 command_runner.go:130] ! I0610 12:30:59.347947       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0610 12:32:16.387895    8536 command_runner.go:130] ! I0610 12:30:59.348233       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.348295       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.341877       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.377996       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.378109       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.378362       1 controller.go:139] Starting OpenAPI controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.378742       1 controller.go:87] Starting OpenAPI V3 controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.378883       1 naming_controller.go:291] Starting NamingConditionController
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379043       1 establishing_controller.go:76] Starting EstablishingController
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379247       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379438       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379518       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379777       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379999       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.524664       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.525326       1 policy_source.go:224] refreshing policies
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.543486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.547084       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.548579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.549972       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.550011       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.551151       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.554229       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.560228       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.578343       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.578414       1 aggregator.go:165] initial CRD sync complete...
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.578429       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.578437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.578466       1 cache.go:39] Caches are synced for autoregister controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.606740       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:31:00.360768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 12:32:16.387958    8536 command_runner.go:130] ! W0610 12:31:00.893787       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.150.144]
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:31:00.913283       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:00.933946       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:02.471259       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:02.690867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:02.714405       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:02.840117       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:02.856715       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
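In the kube-apiserver log above, each enabled GroupVersion is added to the ResourceManager while alpha/beta versions with no resources are skipped, so the served API surface is exactly the non-skipped set. One way to observe that resulting set from outside is via client-go discovery, sketched below; the kubeconfig path (clientcmd.RecommendedHomeFile, i.e. ~/.kube/config) is an assumption for illustration.

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			// e.g. "apps/v1"; the skipped beta/alpha versions won't appear.
			fmt.Println(v.GroupVersion)
		}
	}
}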
	I0610 12:32:16.396774    8536 logs.go:123] Gathering logs for kube-scheduler [d90e72ef4670] ...
	I0610 12:32:16.396774    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90e72ef4670"
	I0610 12:32:16.422903    8536 command_runner.go:130] ! I0610 12:30:56.811878       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:16.422903    8536 command_runner.go:130] ! W0610 12:30:59.481898       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:16.423247    8536 command_runner.go:130] ! W0610 12:30:59.482123       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.423345    8536 command_runner.go:130] ! W0610 12:30:59.482217       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:16.423417    8536 command_runner.go:130] ! W0610 12:30:59.482255       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:16.423491    8536 command_runner.go:130] ! I0610 12:30:59.514164       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:16.423517    8536 command_runner.go:130] ! I0610 12:30:59.514266       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.423582    8536 command_runner.go:130] ! I0610 12:30:59.518405       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:16.423600    8536 command_runner.go:130] ! I0610 12:30:59.518496       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:16.423600    8536 command_runner.go:130] ! I0610 12:30:59.518958       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:16.423668    8536 command_runner.go:130] ! I0610 12:30:59.519337       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:16.423720    8536 command_runner.go:130] ! I0610 12:30:59.619122       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
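The scheduler warnings above mean system:kube-scheduler could not read the kube-system/extension-apiserver-authentication ConfigMap at startup, and the log itself names the RBAC rolebinding fix. A sketch of the same lookup with client-go, useful for checking whether that permission is in place for a given identity; the kubeconfig path is again an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm, err := cs.CoreV1().ConfigMaps("kube-system").
		Get(context.TODO(), "extension-apiserver-authentication", metav1.GetOptions{})
	if err != nil {
		// A Forbidden error here is the same condition the scheduler hit.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("client-ca-file present:", cm.Data["client-ca-file"] != "")
}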
	I0610 12:32:16.425942    8536 logs.go:123] Gathering logs for kube-proxy [1de5fa0ef838] ...
	I0610 12:32:16.425942    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1de5fa0ef838"
	I0610 12:32:16.452339    8536 command_runner.go:130] ! I0610 12:31:02.254962       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:16.453007    8536 command_runner.go:130] ! I0610 12:31:02.294630       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.150.144"]
	I0610 12:32:16.453007    8536 command_runner.go:130] ! I0610 12:31:02.403290       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:16.453041    8536 command_runner.go:130] ! I0610 12:31:02.403338       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:16.453041    8536 command_runner.go:130] ! I0610 12:31:02.403357       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.416009       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.416300       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.416345       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.424657       1 config.go:192] "Starting service config controller"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.425325       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.425369       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.425382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.432037       1 config.go:319] "Starting node config controller"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.432075       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.535663       1 shared_informer.go:320] Caches are synced for node config
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.535744       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.535786       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:32:16.456220    8536 logs.go:123] Gathering logs for kindnet [c3c4316beca6] ...
	I0610 12:32:16.456220    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c4316beca6"
	I0610 12:32:16.485136    8536 command_runner.go:130] ! I0610 12:31:02.264969       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 12:32:16.485448    8536 command_runner.go:130] ! I0610 12:31:02.265572       1 main.go:107] hostIP = 172.17.150.144
	I0610 12:32:16.485506    8536 command_runner.go:130] ! podIP = 172.17.150.144
	I0610 12:32:16.485506    8536 command_runner.go:130] ! I0610 12:31:02.265708       1 main.go:116] setting mtu 1500 for CNI 
	I0610 12:32:16.485506    8536 command_runner.go:130] ! I0610 12:31:02.265761       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 12:32:16.485506    8536 command_runner.go:130] ! I0610 12:31:02.265778       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 12:32:16.485584    8536 command_runner.go:130] ! I0610 12:31:32.684223       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0610 12:32:16.485621    8536 command_runner.go:130] ! I0610 12:31:32.703397       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:16.485658    8536 command_runner.go:130] ! I0610 12:31:32.703595       1 main.go:227] handling current node
	I0610 12:32:16.485658    8536 command_runner.go:130] ! I0610 12:31:32.742189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.485658    8536 command_runner.go:130] ! I0610 12:31:32.742230       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:32.742783       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.151.128 Flags: [] Table: 0} 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:32.743097       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:32.743120       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:32.743193       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750326       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750472       1 main.go:227] handling current node
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750487       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750494       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750648       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750678       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:52.767023       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:52.767174       1 main.go:227] handling current node
	I0610 12:32:16.485975    8536 command_runner.go:130] ! I0610 12:31:52.767191       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.486098    8536 command_runner.go:130] ! I0610 12:31:52.767199       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.486098    8536 command_runner.go:130] ! I0610 12:31:52.767842       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.486098    8536 command_runner.go:130] ! I0610 12:31:52.767929       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.486146    8536 command_runner.go:130] ! I0610 12:32:02.782886       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:16.486146    8536 command_runner.go:130] ! I0610 12:32:02.782992       1 main.go:227] handling current node
	I0610 12:32:16.486146    8536 command_runner.go:130] ! I0610 12:32:02.783008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.486332    8536 command_runner.go:130] ! I0610 12:32:02.783073       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.486332    8536 command_runner.go:130] ! I0610 12:32:02.783951       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.486332    8536 command_runner.go:130] ! I0610 12:32:02.784044       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.486332    8536 command_runner.go:130] ! I0610 12:32:12.799859       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:16.486332    8536 command_runner.go:130] ! I0610 12:32:12.799956       1 main.go:227] handling current node
	I0610 12:32:16.486445    8536 command_runner.go:130] ! I0610 12:32:12.799981       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.486445    8536 command_runner.go:130] ! I0610 12:32:12.799989       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.486445    8536 command_runner.go:130] ! I0610 12:32:12.800455       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.486494    8536 command_runner.go:130] ! I0610 12:32:12.800616       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
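This kindnet instance (c3c4316beca6) shows the startup path on a freshly restarted node: the first node list against https://10.96.0.1:443 times out while the apiserver is still coming up, the daemon logs "Failed to get nodes, retrying after error", and once a fetch succeeds it installs both peer routes (routes.go:62) and settles into the periodic loop. A minimal sketch of that retry-until-ready shape, with a stand-in fetch function in place of the real client-go node list call:

package main

import (
	"errors"
	"fmt"
	"time"
)

// getNodes is a stand-in for the real GET against
// https://10.96.0.1:443/api/v1/nodes; here it always fails the way
// the log above does before the apiserver is reachable.
func getNodes() ([]string, error) {
	return nil, errors.New("dial tcp 10.96.0.1:443: i/o timeout")
}

func main() {
	for {
		nodes, err := getNodes()
		if err != nil {
			fmt.Println("Failed to get nodes, retrying after error:", err)
			time.Sleep(10 * time.Second)
			continue
		}
		fmt.Println("got", len(nodes), "nodes; proceeding to route setup")
		return
	}
}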
	I0610 12:32:16.491125    8536 logs.go:123] Gathering logs for Docker ...
	I0610 12:32:16.491205    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:16.525155    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:16.525155    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:16.525235    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:16.525235    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:16.525235    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:16.525235    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.525235    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0610 12:32:16.525334    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.525379    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0610 12:32:16.525379    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:16.525435    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.525497    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:16.525556    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.665734294Z" level=info msg="Starting up"
	I0610 12:32:16.525556    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.666799026Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:16.525623    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.668025832Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I0610 12:32:16.525807    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.707077561Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:16.525871    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745342414Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:16.526003    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745425201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:16.526072    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745528085Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:16.526136    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745580077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526136    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746319960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.526202    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746463837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526263    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746722696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.526263    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746775088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526263    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746796184Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:16.526263    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746813182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526263    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.747203320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526809    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.748049086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526856    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752393000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.526856    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752519780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.527009    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752692453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.527062    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752790737Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:16.527062    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753305956Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:16.527103    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753420338Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:16.527212    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753439135Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:16.527253    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759080243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:16.527292    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759316106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:16.527383    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759347801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:16.527425    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759374497Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:16.527490    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759392594Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:16.527490    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759476281Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:16.527546    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759928509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:16.527546    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760128877Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:16.527610    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760824467Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:16.527687    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760850663Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:16.527752    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760867361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527752    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760883758Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527810    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760898556Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527810    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760914553Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527864    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760935350Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527864    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760951047Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527922    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760966645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527984    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760986442Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.528044    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761064230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528044    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761105323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528044    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761128319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528126    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761143417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528126    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761158215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528187    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761173012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528187    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761187310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528252    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761210007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528252    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761455768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528316    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761477764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528373    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761493962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528373    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761507660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528491    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761522057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528491    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761538755Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:16.528557    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761561351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528619    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761583448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528619    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761598445Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:16.528675    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761652437Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:16.528675    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761676833Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:16.528729    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761691230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:16.528867    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761709928Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:16.528927    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761721526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.529022    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761735324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:16.529075    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761752021Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:16.529114    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762164056Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:16.529148    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762290536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:16.529148    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762532698Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:16.529236    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762557794Z" level=info msg="containerd successfully booted in 0.059804s"
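The managed containerd instance above finishes booting and serves three sockets (debug, ttrpc, and the main gRPC endpoint). Anything that speaks the containerd API can inspect it directly over that last socket. A minimal sketch in Go, assuming the github.com/containerd/containerd client library, shell access inside the minikube VM, and the fact that dockerd keeps its resources in the "moby" namespace:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Dial the main gRPC socket the daemon just announced.
    	client, err := containerd.New("/var/run/docker/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// dockerd keeps its containers in the "moby" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "moby")

    	containers, err := client.Containers(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("moby namespace holds %d container(s)\n", len(containers))
    }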
	I0610 12:32:16.529236    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.723660372Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:16.529267    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.979070633Z" level=info msg="Loading containers: start."
	I0610 12:32:16.529344    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.430556665Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:16.529379    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.525359393Z" level=info msg="Loading containers: done."
	I0610 12:32:16.529480    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.555368825Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:16.529480    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.556499190Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:16.529480    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614621979Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:16.529558    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614710469Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:16.529558    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 systemd[1]: Started Docker Application Container Engine.
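With "API listen on /var/run/docker.sock" and "[::]:2376" logged, the Engine API is reachable on both endpoints. A minimal sketch of querying it, assuming the official github.com/docker/docker/client package and access to the unix socket from inside the VM:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	// Talk to the daemon over the unix socket it just announced.
    	cli, err := client.NewClientWithOpts(
    		client.WithHost("unix:///var/run/docker.sock"),
    		client.WithAPIVersionNegotiation(),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	v, err := cli.ServerVersion(context.Background())
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(v.Version) // 26.1.4, matching the "Docker daemon" line above
    }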
	I0610 12:32:16.529617    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.105858304Z" level=info msg="Processing signal 'terminated'"
	I0610 12:32:16.529617    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.107858244Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0610 12:32:16.529683    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 systemd[1]: Stopping Docker Application Container Engine...
	I0610 12:32:16.529740    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109274172Z" level=info msg="Daemon shutdown complete"
	I0610 12:32:16.529740    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109439076Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0610 12:32:16.529865    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109591179Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0610 12:32:16.529899    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: docker.service: Deactivated successfully.
	I0610 12:32:16.529930    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Stopped Docker Application Container Engine.
	I0610 12:32:16.529966    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:16.530040    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.200932485Z" level=info msg="Starting up"
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.202989526Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.204789062Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.250167169Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291799101Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291856902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291930003Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291948904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291983304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291997405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292182308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292287811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292310511Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292322911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292350212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292701119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.295953884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.530615    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296063086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530660    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296411793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.530729    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296455694Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:16.530848    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296587396Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:16.530894    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296721299Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:16.530997    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296741600Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:16.531171    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296941504Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:16.531254    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297027105Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:16.531297    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297046206Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:16.531368    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297078906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:16.531443    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297254610Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:16.531443    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297334111Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:16.531525    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297955024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:16.531586    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298031825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:16.531586    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298071126Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:16.531586    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298090126Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:16.531664    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298105527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531724    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298120527Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531724    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298155728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531724    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298172828Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531724    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298189828Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531822    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298204229Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531822    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298218329Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531822    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298230929Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531940    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298260030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532031    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298281530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298296531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298318131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298333531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298494735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298514735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298529635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298592837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298610037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298624437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298639137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298652438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298669738Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298693539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298708139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298720839Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298773440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298792441Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298805041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298820841Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298832741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298850742Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298862942Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299109447Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299202249Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299272150Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299312051Z" level=info msg="containerd successfully booted in 0.052836s"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.253253712Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.287070988Z" level=info msg="Loading containers: start."
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.612574192Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.704084520Z" level=info msg="Loading containers: done."
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733112200Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733256003Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.788468006Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 systemd[1]: Started Docker Application Container Engine.
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.790252742Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Loaded network plugin cni"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start cri-dockerd grpc backend"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
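At this point cri-dockerd is serving the Kubernetes CRI over gRPC, which is how the kubelet drives Docker. The endpoint can also be probed by hand; a minimal sketch, assuming k8s.io/cri-api plus google.golang.org/grpc, and the socket path unix:///var/run/cri-dockerd.sock that appears later in this log in the node's cri-socket annotation:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial the CRI socket cri-dockerd serves (see the node annotation
    	// kubeadm.alpha.kubernetes.io/cri-socket further down in this log).
    	conn, err := grpc.Dial("unix:///var/run/cri-dockerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	v, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(v.RuntimeName, v.RuntimeVersion) // expect "docker" and the engine version
    }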
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-kbhvv_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c\""
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-z28tq_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d\""
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013449453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013587556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013608856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013775860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.087769538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089579074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089879880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.090133785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183156944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183215145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183227346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183318447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533506    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f56cc8af37db0f3fea8de363d927c6924c7ad7e81f4908f6f5c87d6c0db17a61/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.533561    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244245765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.533619    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244411968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.533619    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244427968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533671    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244593672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533873    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8902dac03acbce14b7e106bff482e591dd574972082943e9adda30969716a707/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.534025    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b13c0058ce265f3c4b18ec59cbb42b72803807a8d96330756114b2526fffa2de/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.534025    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.534131    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611175897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534168    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611296299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534168    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611337700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534227    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.612109315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730665784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730725385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730738886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730907689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848373736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848822145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851216993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851612501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900274973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900404876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900419576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900508378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:59Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
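The "Docker cri received runtime config" line above is the kubelet pushing the node's assigned pod CIDR to the runtime through the CRI UpdateRuntimeConfig call; compare the earlier occurrence of the same message at 12:30:47, where PodCidr was still empty. A sketch of the same call, under the same assumptions as the CRI example above:

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/run/cri-dockerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Hand the runtime the pod CIDR assigned to this node, as the kubelet
    	// does; the runtime then logs the config it received (see above).
    	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(ctx,
    		&runtimeapi.UpdateRuntimeConfigRequest{
    			RuntimeConfig: &runtimeapi.RuntimeConfig{
    				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "10.244.0.0/24"},
    			},
    		})
    	if err != nil {
    		log.Fatal(err)
    	}
    }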
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830014876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830867993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831086098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831510106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854754571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854918174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.857723530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.858668949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.877394923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534863    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878360042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534863    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878507645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534863    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.879086357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534863    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d997d7c306c2a08fab9e0e53bd14a9da495d8b0abdad38c9935489b788eccd/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.535008    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2dd9b423841c9fee92dc2a884fe8f45fb9dd5b8713214ce8804ac8ced10629d1/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.535008    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337790622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535080    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337963526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535142    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337992226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535142    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.338102629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535200    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.394005846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535200    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396505296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535265    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396667999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535265    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396999105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535328    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c19b39e15f6ae82627ffedaf799ef63dd09554d65260dbfc8856b08a4ce7354/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.535389    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.711733694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535389    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712144402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535443    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712256705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535496    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712964519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535496    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.980963328Z" level=info msg="shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:16.535552    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981035932Z" level=warning msg="cleaning up after shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:16.535605    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981047633Z" level=info msg="cleaning up dead shim" namespace=moby
	I0610 12:32:16.535605    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1052]: time="2024-06-10T12:31:31.981399154Z" level=info msg="ignoring event" container=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0610 12:32:16.535669    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.486941957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535721    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487165464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535778    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487187665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.488142597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345354892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345592698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345620799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345913706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.511059667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512286197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512437501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512775109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241c4811748facbb85003522d513039c3dfc5b38006b7f1cba90a5e411055e97/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4d124cebb3b3affe7ace090f1a152544207db26621b5b4098cad87e3db47a4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
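Note the two different rewrites above: the first container gets the VM's upstream resolver (172.17.144.1), while the second gets the cluster DNS ClusterIP (10.96.0.10, the usual kube-dns service address) with the standard ClusterFirst search path, so its resolv.conf ends up as:

    nameserver 10.96.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5

With ndots:5, short service names are expanded through those search domains before any absolute lookup is attempted upstream.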
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955148547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955266050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955283650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955812861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444723816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.536371    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444892597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.536371    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444914895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.536542    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.445846695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.570404    8536 logs.go:123] Gathering logs for describe nodes ...
	I0610 12:32:16.570404    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 12:32:16.791070    8536 command_runner.go:130] > Name:               multinode-813300
	I0610 12:32:16.791070    8536 command_runner.go:130] > Roles:              control-plane
	I0610 12:32:16.791070    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:16.791672    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:16.791672    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:16.791672    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300
	I0610 12:32:16.791672    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:16.791672    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:16.791732    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:16.791732    8536 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0610 12:32:16.791732    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700
	I0610 12:32:16.791792    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:16.791792    8536 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0610 12:32:16.791792    8536 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0610 12:32:16.791792    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:16.791792    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:16.791852    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:16.791852    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:07:57 +0000
	I0610 12:32:16.791852    8536 command_runner.go:130] > Taints:             <none>
	I0610 12:32:16.791852    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:16.791928    8536 command_runner.go:130] > Lease:
	I0610 12:32:16.791928    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300
	I0610 12:32:16.791928    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:16.791928    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:32:10 +0000
	I0610 12:32:16.791982    8536 command_runner.go:130] > Conditions:
	I0610 12:32:16.791982    8536 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0610 12:32:16.791982    8536 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0610 12:32:16.791982    8536 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0610 12:32:16.792049    8536 command_runner.go:130] >   DiskPressure     False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0610 12:32:16.792049    8536 command_runner.go:130] >   PIDPressure      False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0610 12:32:16.792375    8536 command_runner.go:130] >   Ready            True    Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:31:40 +0000   KubeletReady                 kubelet is posting ready status
	I0610 12:32:16.792444    8536 command_runner.go:130] > Addresses:
	I0610 12:32:16.792444    8536 command_runner.go:130] >   InternalIP:  172.17.150.144
	I0610 12:32:16.792444    8536 command_runner.go:130] >   Hostname:    multinode-813300
	I0610 12:32:16.792444    8536 command_runner.go:130] > Capacity:
	I0610 12:32:16.792444    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.792502    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.792551    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.792551    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.792551    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.792551    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:16.792551    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.792633    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.792633    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.792633    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.792633    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.792633    8536 command_runner.go:130] > System Info:
	I0610 12:32:16.792633    8536 command_runner.go:130] >   Machine ID:                 8363a852b0fa420a8dccb009e6f4f9c7
	I0610 12:32:16.792633    8536 command_runner.go:130] >   System UUID:                5734c1ff-f59b-f647-9c36-fb6d9a8cd541
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Boot ID:                    a60b688f-6b78-4fa5-b21e-96a64e5c1047
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:16.792752    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:16.792831    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:16.792831    8536 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0610 12:32:16.792831    8536 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0610 12:32:16.792831    8536 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0610 12:32:16.792831    8536 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:16.792899    8536 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0610 12:32:16.792899    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-z28tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:16.792899    8536 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-kbhvv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0610 12:32:16.792958    8536 command_runner.go:130] >   kube-system                 etcd-multinode-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0610 12:32:16.792958    8536 command_runner.go:130] >   kube-system                 kindnet-29gbv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0610 12:32:16.792958    8536 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0610 12:32:16.793049    8536 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:16.793087    8536 command_runner.go:130] >   kube-system                 kube-proxy-nrpvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:16.793087    8536 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:16.793087    8536 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0610 12:32:16.793087    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:16.793087    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:16.793203    8536 command_runner.go:130] >   Resource           Requests     Limits
	I0610 12:32:16.793203    8536 command_runner.go:130] >   --------           --------     ------
	I0610 12:32:16.793248    8536 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0610 12:32:16.793248    8536 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0610 12:32:16.793248    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0610 12:32:16.793248    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0610 12:32:16.793248    8536 command_runner.go:130] > Events:
	I0610 12:32:16.793248    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:16.793312    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:16.793312    8536 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0610 12:32:16.793312    8536 command_runner.go:130] >   Normal  Starting                 74s                kube-proxy       
	I0610 12:32:16.793352    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:16.793352    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:16.793400    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:16.793400    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:16.793400    8536 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0610 12:32:16.793400    8536 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:16.793459    8536 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-813300 status is now: NodeReady
	I0610 12:32:16.793459    8536 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0610 12:32:16.793459    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:16.793459    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:16.793550    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:16.793550    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:16.793550    8536 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:16.793608    8536 command_runner.go:130] > Name:               multinode-813300-m02
	I0610 12:32:16.793608    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:16.793608    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:16.793608    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:16.793608    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:16.793608    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m02
	I0610 12:32:16.793608    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:16.793683    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:16.793683    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:16.793683    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:16.793683    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700
	I0610 12:32:16.793683    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:16.793683    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:16.793746    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:16.793746    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:16.793746    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:11:28 +0000
	I0610 12:32:16.793746    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:16.793746    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:16.793746    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:16.793809    8536 command_runner.go:130] > Lease:
	I0610 12:32:16.793809    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m02
	I0610 12:32:16.793809    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:16.793809    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:30 +0000
	I0610 12:32:16.793809    8536 command_runner.go:130] > Conditions:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:16.794015    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:16.794015    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.794015    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.794015    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.794015    8536 command_runner.go:130] > Addresses:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   InternalIP:  172.17.151.128
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Hostname:    multinode-813300-m02
	I0610 12:32:16.794015    8536 command_runner.go:130] > Capacity:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.794015    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.794015    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.794015    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.794015    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.794015    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.794015    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.794015    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.794015    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.794015    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.794015    8536 command_runner.go:130] > System Info:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Machine ID:                 0d46b791e8a04ff7a071c88405a5a4eb
	I0610 12:32:16.794015    8536 command_runner.go:130] >   System UUID:                e053fc34-e8e5-6649-afc7-f62c0d458753
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Boot ID:                    a3528c50-da8b-4321-8198-65ea5eca732a
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:16.794015    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:16.794015    8536 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0610 12:32:16.794015    8536 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0610 12:32:16.794015    8536 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:16.794015    8536 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0610 12:32:16.794015    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-czxmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:16.794015    8536 command_runner.go:130] >   kube-system                 kindnet-r4nfq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0610 12:32:16.794015    8536 command_runner.go:130] >   kube-system                 kube-proxy-rx2b2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0610 12:32:16.794015    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:16.794574    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:16.794574    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:16.794574    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:16.794574    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:16.794637    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:16.794637    8536 command_runner.go:130] > Events:
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:16.794637    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientMemory
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasNoDiskPressure
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientPID
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:16.794750    8536 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-813300-m02 status is now: NodeReady
	I0610 12:32:16.794750    8536 command_runner.go:130] >   Normal  NodeNotReady             4m1s               node-controller  Node multinode-813300-m02 status is now: NodeNotReady
	I0610 12:32:16.794750    8536 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:16.794750    8536 command_runner.go:130] > Name:               multinode-813300-m03
	I0610 12:32:16.794750    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:16.794836    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m03
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:16.794971    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:16.794971    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_25_53_0700
	I0610 12:32:16.794971    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:16.794971    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:16.794971    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:16.795034    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:16.795034    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:25:52 +0000
	I0610 12:32:16.795034    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:16.795034    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:16.795034    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:16.795034    8536 command_runner.go:130] > Lease:
	I0610 12:32:16.795034    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m03
	I0610 12:32:16.795093    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:16.795093    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:04 +0000
	I0610 12:32:16.795093    8536 command_runner.go:130] > Conditions:
	I0610 12:32:16.795093    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:16.795169    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:16.795169    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.795418    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.795418    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.795418    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.795485    8536 command_runner.go:130] > Addresses:
	I0610 12:32:16.795485    8536 command_runner.go:130] >   InternalIP:  172.17.144.46
	I0610 12:32:16.795485    8536 command_runner.go:130] >   Hostname:    multinode-813300-m03
	I0610 12:32:16.795485    8536 command_runner.go:130] > Capacity:
	I0610 12:32:16.795485    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.795485    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.795554    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.795554    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.795554    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.795554    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:16.795554    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.795554    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.795554    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.795612    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.795612    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.795612    8536 command_runner.go:130] > System Info:
	I0610 12:32:16.795612    8536 command_runner.go:130] >   Machine ID:                 2d60e1f6e3b2454db505a650eae61212
	I0610 12:32:16.795612    8536 command_runner.go:130] >   System UUID:                b38b4a9a-39f6-6f43-9e6d-19433dc62cd9
	I0610 12:32:16.795676    8536 command_runner.go:130] >   Boot ID:                    0a419483-5289-4d17-96c2-fd4487360412
	I0610 12:32:16.795676    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:16.795676    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:16.795676    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:16.795676    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:16.795736    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:16.795736    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:16.795736    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:16.795736    8536 command_runner.go:130] > PodCIDR:                      10.244.2.0/24
	I0610 12:32:16.795736    8536 command_runner.go:130] > PodCIDRs:                     10.244.2.0/24
	I0610 12:32:16.795799    8536 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0610 12:32:16.795799    8536 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:16.795799    8536 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0610 12:32:16.795799    8536 command_runner.go:130] >   kube-system                 kindnet-2pc4j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	I0610 12:32:16.795856    8536 command_runner.go:130] >   kube-system                 kube-proxy-vw56h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	I0610 12:32:16.795856    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:16.795856    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:16.795856    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:16.795856    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:16.795856    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:16.795920    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:16.795920    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:16.795920    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:16.795920    8536 command_runner.go:130] > Events:
	I0610 12:32:16.795920    8536 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0610 12:32:16.795979    8536 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0610 12:32:16.795979    8536 command_runner.go:130] >   Normal  Starting                 6m11s                  kube-proxy       
	I0610 12:32:16.796063    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m24s (x2 over 6m24s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientMemory
	I0610 12:32:16.796088    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m24s (x2 over 6m24s)  kubelet          Node multinode-813300-m03 status is now: NodeHasNoDiskPressure
	I0610 12:32:16.796129    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m24s (x2 over 6m24s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientPID
	I0610 12:32:16.796129    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:16.796129    8536 command_runner.go:130] >   Normal  RegisteredNode           6m22s                  node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
	I0610 12:32:16.796190    8536 command_runner.go:130] >   Normal  NodeReady                6m3s                   kubelet          Node multinode-813300-m03 status is now: NodeReady
	I0610 12:32:16.796190    8536 command_runner.go:130] >   Normal  NodeNotReady             4m32s                  node-controller  Node multinode-813300-m03 status is now: NodeNotReady
	I0610 12:32:16.796190    8536 command_runner.go:130] >   Normal  RegisteredNode           64s                    node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
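The describe output above is the key diagnostic for this run: the control-plane node multinode-813300 is Ready again after the restart, while multinode-813300-m02 and multinode-813300-m03 still carry the node.kubernetes.io/unreachable NoSchedule/NoExecute taints and report every condition as Unknown ("Kubelet stopped posting node status."), meaning their kubelets have not rejoined yet. A minimal sketch of the same check from the host, assuming the kubeconfig context minikube creates for this profile:

    kubectl --context multinode-813300 get nodes -o wide
    kubectl --context multinode-813300 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'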
	I0610 12:32:16.807247    8536 logs.go:123] Gathering logs for dmesg ...
	I0610 12:32:16.807247    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 12:32:16.833337    8536 command_runner.go:130] > [Jun10 12:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.132459] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.024371] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.082449] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.022513] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0610 12:32:16.833337    8536 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +5.764981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +1.334692] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +1.227872] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +7.275008] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0610 12:32:16.833337    8536 command_runner.go:130] > [Jun10 12:30] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.213819] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [ +29.247267] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.109477] kauditd_printk_skb: 73 callbacks suppressed
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.638576] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.214581] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.255487] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +3.027967] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.239865] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.216732] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0610 12:32:16.833895    8536 command_runner.go:130] > [  +0.314976] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0610 12:32:16.833895    8536 command_runner.go:130] > [  +0.112938] kauditd_printk_skb: 183 callbacks suppressed
	I0610 12:32:16.833895    8536 command_runner.go:130] > [  +0.871081] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	I0610 12:32:16.833945    8536 command_runner.go:130] > [  +5.053506] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	I0610 12:32:16.833945    8536 command_runner.go:130] > [  +0.123809] kauditd_printk_skb: 34 callbacks suppressed
	I0610 12:32:16.833945    8536 command_runner.go:130] > [Jun10 12:31] kauditd_printk_skb: 62 callbacks suppressed
	I0610 12:32:16.833945    8536 command_runner.go:130] > [  +3.513215] hrtimer: interrupt took 368589 ns
	I0610 12:32:16.833945    8536 command_runner.go:130] > [  +0.107277] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0610 12:32:16.833945    8536 command_runner.go:130] > [  +7.541664] kauditd_printk_skb: 70 callbacks suppressed
	I0610 12:32:16.836396    8536 logs.go:123] Gathering logs for etcd [877ee07c1499] ...
	I0610 12:32:16.836431    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877ee07c1499"
	I0610 12:32:16.866563    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.207374Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208407Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.150.144:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.150.144:2380","--initial-cluster=multinode-813300=https://172.17.150.144:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.150.144:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.150.144:2380","--name=multinode-813300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208499Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.208577Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208593Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.150.144:2380"]}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208715Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.218326Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"]}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.22047Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-813300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.244201Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.944438ms"}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.274404Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.303075Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","commit-index":1913}
	I0610 12:32:16.867380    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=()"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became follower at term 2"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8f4442f54c46fb8d [peers: [], term: 2, commit: 1913, applied: 0, lastindex: 1913, lastterm: 2]"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.318917Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.323726Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1273}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.328272Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1642}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.335671Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.347777Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8f4442f54c46fb8d","timeout":"7s"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.349755Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8f4442f54c46fb8d"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.350228Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8f4442f54c46fb8d","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.352715Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.36067Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361057Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361302Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=(10323449867154160525)"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363612Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","added-peer-id":"8f4442f54c46fb8d","added-peer-peer-urls":["https://172.17.159.171:2380"]}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364067Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","cluster-version":"3.5"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.367772Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.373962Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.150.144:2380"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.374209Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.150.144:2380"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375497Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8f4442f54c46fb8d","initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0610 12:32:16.867993    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375805Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d is starting a new election at term 2"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.50539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became pre-candidate at term 2"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgPreVoteResp from 8f4442f54c46fb8d at term 2"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became candidate at term 3"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgVoteResp from 8f4442f54c46fb8d at term 3"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became leader at term 3"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8f4442f54c46fb8d elected leader 8f4442f54c46fb8d at term 3"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.511486Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8f4442f54c46fb8d","local-member-attributes":"{Name:multinode-813300 ClientURLs:[https://172.17.150.144:2379]}","request-path":"/0/members/8f4442f54c46fb8d/attributes","cluster-id":"ede117c4f607edf2","publish-timeout":"7s"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512682Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.517481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520973Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.543402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.150.144:2379"}
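The etcd log above shows a clean single-member recovery: the member comes back as a follower at term 2, passes the initial corruption check, wins a pre-vote, and elects itself leader at term 3. One detail worth noting is that the stored membership record still lists the pre-restart peer URL (added-peer-peer-urls: https://172.17.159.171:2380) while the server now listens on 172.17.150.144, consistent with the VM receiving a new DHCP address from the Hyper-V switch after the restart. A hedged sketch for inspecting this from inside the guest, running etcdctl in the etcd container named in the gather step above (container ID and cert paths are taken from this log; etcdctl ships in the etcd image):

    minikube -p multinode-813300 ssh
    docker exec 877ee07c1499 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      member list -w table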
	I0610 12:32:16.874308    8536 logs.go:123] Gathering logs for coredns [f2e39052db19] ...
	I0610 12:32:16.874308    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e39052db19"
	I0610 12:32:16.905717    8536 command_runner.go:130] > .:53
	I0610 12:32:16.905778    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:16.905778    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:16.905778    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:16.905841    8536 command_runner.go:130] > [INFO] 127.0.0.1:46276 - 35337 "HINFO IN 965239639799927989.3587586823131848737. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.052340371s
	I0610 12:32:16.905841    8536 command_runner.go:130] > [INFO] 10.244.1.2:36040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003047s
	I0610 12:32:16.905895    8536 command_runner.go:130] > [INFO] 10.244.1.2:51901 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.165635405s
	I0610 12:32:16.905895    8536 command_runner.go:130] > [INFO] 10.244.1.2:38890 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.065664181s
	I0610 12:32:16.905895    8536 command_runner.go:130] > [INFO] 10.244.1.2:40219 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.107303974s
	I0610 12:32:16.905895    8536 command_runner.go:130] > [INFO] 10.244.0.3:38184 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002396s
	I0610 12:32:16.905895    8536 command_runner.go:130] > [INFO] 10.244.0.3:57966 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001307s
	I0610 12:32:16.905952    8536 command_runner.go:130] > [INFO] 10.244.0.3:38338 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002131s
	I0610 12:32:16.906008    8536 command_runner.go:130] > [INFO] 10.244.0.3:41898 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000121s
	I0610 12:32:16.906008    8536 command_runner.go:130] > [INFO] 10.244.1.2:49043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200101s
	I0610 12:32:16.906008    8536 command_runner.go:130] > [INFO] 10.244.1.2:53918 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.147842886s
	I0610 12:32:16.906055    8536 command_runner.go:130] > [INFO] 10.244.1.2:50531 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001726s
	I0610 12:32:16.906055    8536 command_runner.go:130] > [INFO] 10.244.1.2:41881 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001246s
	I0610 12:32:16.906055    8536 command_runner.go:130] > [INFO] 10.244.1.2:34708 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.030026838s
	I0610 12:32:16.906055    8536 command_runner.go:130] > [INFO] 10.244.1.2:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002834s
	I0610 12:32:16.906055    8536 command_runner.go:130] > [INFO] 10.244.1.2:58166 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001901s
	I0610 12:32:16.906193    8536 command_runner.go:130] > [INFO] 10.244.1.2:46174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001048s
	I0610 12:32:16.906399    8536 command_runner.go:130] > [INFO] 10.244.0.3:52212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003513s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:44369 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095801s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:38578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001615s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:38593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002977s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:38526 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137201s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:48445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001467s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:47462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000731s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:58225 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196101s
	I0610 12:32:16.906589    8536 command_runner.go:130] > [INFO] 10.244.1.2:35924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001833s
	I0610 12:32:16.906589    8536 command_runner.go:130] > [INFO] 10.244.1.2:51712 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001386s
	I0610 12:32:16.906708    8536 command_runner.go:130] > [INFO] 10.244.1.2:37161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007s
	I0610 12:32:16.906751    8536 command_runner.go:130] > [INFO] 10.244.1.2:37141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141s
	I0610 12:32:16.906751    8536 command_runner.go:130] > [INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001227s
	I0610 12:32:16.906751    8536 command_runner.go:130] > [INFO] 10.244.0.3:56133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247001s
	I0610 12:32:16.906751    8536 command_runner.go:130] > [INFO] 10.244.0.3:48451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000604s
	I0610 12:32:16.906751    8536 command_runner.go:130] > [INFO] 10.244.0.3:38368 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001264s
	I0610 12:32:16.906836    8536 command_runner.go:130] > [INFO] 10.244.1.2:44129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001056s
	I0610 12:32:16.906877    8536 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001955s
	I0610 12:32:16.906910    8536 command_runner.go:130] > [INFO] 10.244.1.2:59467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001589s
	I0610 12:32:16.906910    8536 command_runner.go:130] > [INFO] 10.244.1.2:53581 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002131s
	I0610 12:32:16.906946    8536 command_runner.go:130] > [INFO] 10.244.0.3:41745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	I0610 12:32:16.907003    8536 command_runner.go:130] > [INFO] 10.244.0.3:53512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001784s
	I0610 12:32:16.907003    8536 command_runner.go:130] > [INFO] 10.244.0.3:56441 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001208s
	I0610 12:32:16.907003    8536 command_runner.go:130] > [INFO] 10.244.0.3:55640 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001199s
	I0610 12:32:16.907047    8536 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0610 12:32:16.907047    8536 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0610 12:32:16.910637    8536 logs.go:123] Gathering logs for kube-controller-manager [f1409bf44ff1] ...
	I0610 12:32:16.910637    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1409bf44ff1"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:55.502430       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.114557       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.114858       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.117078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.117365       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.118392       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.118623       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.413505       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.413532       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.454038       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.454303       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.454341       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.474947       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.475105       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.475116       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.514703       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:10.509914       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:10.510020       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:16.939467    8536 command_runner.go:130] ! I0610 12:08:10.511115       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:16.939467    8536 command_runner.go:130] ! I0610 12:08:10.511148       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:16.939527    8536 command_runner.go:130] ! I0610 12:08:10.515475       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:16.939527    8536 command_runner.go:130] ! I0610 12:08:10.515547       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:16.939527    8536 command_runner.go:130] ! I0610 12:08:10.516308       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:16.939527    8536 command_runner.go:130] ! I0610 12:08:10.516334       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:16.939593    8536 command_runner.go:130] ! I0610 12:08:10.516340       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:16.939593    8536 command_runner.go:130] ! I0610 12:08:10.531416       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:16.939593    8536 command_runner.go:130] ! I0610 12:08:10.531434       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:16.939652    8536 command_runner.go:130] ! I0610 12:08:10.531293       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:16.939652    8536 command_runner.go:130] ! I0610 12:08:10.543960       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:16.939652    8536 command_runner.go:130] ! I0610 12:08:10.544630       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.544667       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.567000       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.567602       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.568240       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.586627       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.587637       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:16.939863    8536 command_runner.go:130] ! I0610 12:08:10.587654       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:16.939892    8536 command_runner.go:130] ! I0610 12:08:10.623685       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:16.939930    8536 command_runner.go:130] ! I0610 12:08:10.623975       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.624342       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.639985       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.640617       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.640810       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.702326       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.706246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.711937       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.712131       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.712146       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.712235       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.712265       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.724980       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.726393       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.726653       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.742390       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.743099       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.744498       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.759177       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.759262       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.759917       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.759932       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.901245       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.903470       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.903502       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.064066       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.064123       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.064135       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.202164       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.202227       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.202239       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:16.940753    8536 command_runner.go:130] ! I0610 12:08:11.352380       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:16.940816    8536 command_runner.go:130] ! I0610 12:08:11.352546       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:16.940847    8536 command_runner.go:130] ! I0610 12:08:11.352575       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:16.940847    8536 command_runner.go:130] ! I0610 12:08:11.656918       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:16.940888    8536 command_runner.go:130] ! I0610 12:08:11.657560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:16.940888    8536 command_runner.go:130] ! I0610 12:08:11.657950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:16.940923    8536 command_runner.go:130] ! I0610 12:08:11.658269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658785       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658849       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658870       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658895       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.659004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.659056       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:16.941493    8536 command_runner.go:130] ! W0610 12:08:11.659073       1 shared_informer.go:597] resyncPeriod 13h6m28.341601393s is smaller than resyncCheckPeriod 19h0m49.916968618s and the informer has already started. Changing it to 19h0m49.916968618s
	I0610 12:32:16.941560    8536 command_runner.go:130] ! I0610 12:08:11.659195       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:16.941560    8536 command_runner.go:130] ! I0610 12:08:11.659214       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:16.941617    8536 command_runner.go:130] ! I0610 12:08:11.659236       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:16.941669    8536 command_runner.go:130] ! I0610 12:08:11.659287       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.659312       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.659579       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.659591       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.659608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.895313       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.895383       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.895693       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.896490       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.154521       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.154576       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.154658       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.154690       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.301351       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.301495       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.301508       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.495309       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.495425       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.495645       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.495683       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:16.941699    8536 command_runner.go:130] ! E0610 12:08:12.550245       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.550671       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! E0610 12:08:12.700493       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.700528       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:16.942439    8536 command_runner.go:130] ! I0610 12:08:12.700538       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:16.942494    8536 command_runner.go:130] ! I0610 12:08:12.859280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:16.942535    8536 command_runner.go:130] ! I0610 12:08:12.859580       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:16.942535    8536 command_runner.go:130] ! I0610 12:08:12.859953       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:16.942602    8536 command_runner.go:130] ! I0610 12:08:12.906626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:16.942641    8536 command_runner.go:130] ! I0610 12:08:12.907724       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:16.942696    8536 command_runner.go:130] ! I0610 12:08:13.050431       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:16.942729    8536 command_runner.go:130] ! I0610 12:08:13.050510       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:16.942800    8536 command_runner.go:130] ! I0610 12:08:13.205885       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:16.943076    8536 command_runner.go:130] ! I0610 12:08:13.205970       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:16.943123    8536 command_runner.go:130] ! I0610 12:08:13.205982       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:16.943176    8536 command_runner.go:130] ! I0610 12:08:13.351713       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:16.943176    8536 command_runner.go:130] ! I0610 12:08:13.351815       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:16.943264    8536 command_runner.go:130] ! I0610 12:08:13.351830       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:16.943315    8536 command_runner.go:130] ! I0610 12:08:13.603420       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:16.943315    8536 command_runner.go:130] ! I0610 12:08:13.603498       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:16.943315    8536 command_runner.go:130] ! I0610 12:08:13.603510       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:16.943404    8536 command_runner.go:130] ! I0610 12:08:13.750262       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:16.943443    8536 command_runner.go:130] ! I0610 12:08:13.750789       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:16.943443    8536 command_runner.go:130] ! I0610 12:08:13.750809       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:16.943443    8536 command_runner.go:130] ! I0610 12:08:13.900118       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:16.943443    8536 command_runner.go:130] ! I0610 12:08:13.900639       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:16.943443    8536 command_runner.go:130] ! I0610 12:08:13.900897       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:16.943588    8536 command_runner.go:130] ! I0610 12:08:14.054008       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.054156       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.054170       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.199527       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.199627       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.199683       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.199694       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.351474       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.352193       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.352213       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.502148       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.502250       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.502262       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.502696       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.502825       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.546684       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.547077       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.547608       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.547097       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.944119    8536 command_runner.go:130] ! I0610 12:08:14.547127       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:16.945670    8536 command_runner.go:130] ! I0610 12:08:14.547931       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:16.945732    8536 command_runner.go:130] ! I0610 12:08:14.547138       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.945784    8536 command_runner.go:130] ! I0610 12:08:14.547188       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.548434       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.547199       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.547257       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.548692       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.547265       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.558628       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.590023       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:16.946749    8536 command_runner.go:130] ! I0610 12:08:14.600506       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:16.946749    8536 command_runner.go:130] ! I0610 12:08:14.602694       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:16.946776    8536 command_runner.go:130] ! I0610 12:08:14.603324       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:16.946776    8536 command_runner.go:130] ! I0610 12:08:14.609611       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:16.946824    8536 command_runner.go:130] ! I0610 12:08:14.612038       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:16.946824    8536 command_runner.go:130] ! I0610 12:08:14.623629       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:16.946824    8536 command_runner.go:130] ! I0610 12:08:14.624495       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:16.946824    8536 command_runner.go:130] ! I0610 12:08:14.612329       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:16.946885    8536 command_runner.go:130] ! I0610 12:08:14.628289       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:16.946885    8536 command_runner.go:130] ! I0610 12:08:14.630516       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:16.946885    8536 command_runner.go:130] ! I0610 12:08:14.630648       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:16.946885    8536 command_runner.go:130] ! I0610 12:08:14.622860       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:16.946885    8536 command_runner.go:130] ! I0610 12:08:14.627541       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:16.946964    8536 command_runner.go:130] ! I0610 12:08:14.627554       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:16.946964    8536 command_runner.go:130] ! I0610 12:08:14.627562       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:16.947022    8536 command_runner.go:130] ! I0610 12:08:14.627813       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:16.947022    8536 command_runner.go:130] ! I0610 12:08:14.631141       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:16.947069    8536 command_runner.go:130] ! I0610 12:08:14.631364       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:16.947069    8536 command_runner.go:130] ! I0610 12:08:14.631669       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:16.947069    8536 command_runner.go:130] ! I0610 12:08:14.631834       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:16.947069    8536 command_runner.go:130] ! I0610 12:08:14.642451       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:16.947134    8536 command_runner.go:130] ! I0610 12:08:14.644828       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:16.947134    8536 command_runner.go:130] ! I0610 12:08:14.645380       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:16.947177    8536 command_runner.go:130] ! I0610 12:08:14.647678       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:16.947177    8536 command_runner.go:130] ! I0610 12:08:14.648798       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:16.947219    8536 command_runner.go:130] ! I0610 12:08:14.648809       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:16.947219    8536 command_runner.go:130] ! I0610 12:08:14.648848       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:16.947273    8536 command_runner.go:130] ! I0610 12:08:14.656075       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:16.947273    8536 command_runner.go:130] ! I0610 12:08:14.656781       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:16.947273    8536 command_runner.go:130] ! I0610 12:08:14.657449       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:16.947342    8536 command_runner.go:130] ! I0610 12:08:14.657643       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:16.947342    8536 command_runner.go:130] ! I0610 12:08:14.658125       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:16.947342    8536 command_runner.go:130] ! I0610 12:08:14.661079       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:16.947400    8536 command_runner.go:130] ! I0610 12:08:14.668926       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:16.947400    8536 command_runner.go:130] ! I0610 12:08:14.675620       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:16.947440    8536 command_runner.go:130] ! I0610 12:08:14.680953       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300" podCIDRs=["10.244.0.0/24"]
	I0610 12:32:16.947440    8536 command_runner.go:130] ! I0610 12:08:14.687842       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:16.947502    8536 command_runner.go:130] ! I0610 12:08:14.751377       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:16.947502    8536 command_runner.go:130] ! I0610 12:08:14.754827       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:16.947502    8536 command_runner.go:130] ! I0610 12:08:14.795731       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:16.947557    8536 command_runner.go:130] ! I0610 12:08:14.803976       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:16.947557    8536 command_runner.go:130] ! I0610 12:08:14.807376       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:16.947557    8536 command_runner.go:130] ! I0610 12:08:14.807800       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:16.947557    8536 command_runner.go:130] ! I0610 12:08:14.851108       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:16.947611    8536 command_runner.go:130] ! I0610 12:08:14.858915       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:16.947611    8536 command_runner.go:130] ! I0610 12:08:14.859692       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:16.947611    8536 command_runner.go:130] ! I0610 12:08:14.864873       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:16.947686    8536 command_runner.go:130] ! I0610 12:08:15.295934       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:16.947686    8536 command_runner.go:130] ! I0610 12:08:15.296041       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:16.947686    8536 command_runner.go:130] ! I0610 12:08:15.332772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:16.947726    8536 command_runner.go:130] ! I0610 12:08:15.887603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="329.520484ms"
	I0610 12:32:16.947726    8536 command_runner.go:130] ! I0610 12:08:16.024148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.478301ms"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:16.151441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.784808ms"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:16.151859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="288.402µs"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:16.577624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.03545ms"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:16.593339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.556101ms"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:16.593508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.3µs"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:30.535681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130µs"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:30.566310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.4µs"
	I0610 12:32:16.947924    8536 command_runner.go:130] ! I0610 12:08:32.538906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="180.301µs"
	I0610 12:32:16.947924    8536 command_runner.go:130] ! I0610 12:08:32.610537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.137489ms"
	I0610 12:32:16.947924    8536 command_runner.go:130] ! I0610 12:08:32.611020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5µs"
	I0610 12:32:16.947924    8536 command_runner.go:130] ! I0610 12:08:34.635560       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:16.947924    8536 command_runner.go:130] ! I0610 12:11:28.859639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:16.948054    8536 command_runner.go:130] ! I0610 12:11:28.879298       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m02" podCIDRs=["10.244.1.0/24"]
	I0610 12:32:16.948054    8536 command_runner.go:130] ! I0610 12:11:29.670639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:16.948054    8536 command_runner.go:130] ! I0610 12:11:51.574110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:16.948118    8536 command_runner.go:130] ! I0610 12:12:19.785464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.490556ms"
	I0610 12:32:16.948162    8536 command_runner.go:130] ! I0610 12:12:19.804051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.524284ms"
	I0610 12:32:16.948162    8536 command_runner.go:130] ! I0610 12:12:19.806222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0610 12:32:16.948162    8536 command_runner.go:130] ! I0610 12:12:19.813010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.401µs"
	I0610 12:32:16.948228    8536 command_runner.go:130] ! I0610 12:12:19.818841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.9µs"
	I0610 12:32:16.948228    8536 command_runner.go:130] ! I0610 12:12:22.803157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.023114ms"
	I0610 12:32:16.948273    8536 command_runner.go:130] ! I0610 12:12:22.803959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.7µs"
	I0610 12:32:16.948310    8536 command_runner.go:130] ! I0610 12:12:23.117968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.704624ms"
	I0610 12:32:16.948310    8536 command_runner.go:130] ! I0610 12:12:23.118507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.5µs"
	I0610 12:32:16.948352    8536 command_runner.go:130] ! I0610 12:25:52.678571       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:16.948388    8536 command_runner.go:130] ! I0610 12:25:52.681612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:16.948421    8536 command_runner.go:130] ! I0610 12:25:52.698797       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m03" podCIDRs=["10.244.2.0/24"]
	I0610 12:32:16.948451    8536 command_runner.go:130] ! I0610 12:25:54.878967       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:16.948451    8536 command_runner.go:130] ! I0610 12:26:13.380155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:16.948451    8536 command_runner.go:130] ! I0610 12:27:44.944679       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:16.948451    8536 command_runner.go:130] ! I0610 12:28:15.516170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.644756ms"
	I0610 12:32:16.948451    8536 command_runner.go:130] ! I0610 12:28:15.516815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.1µs"
	I0610 12:32:16.969018    8536 logs.go:123] Gathering logs for container status ...
	I0610 12:32:16.969018    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 12:32:17.043020    8536 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0610 12:32:17.043184    8536 command_runner.go:130] > b9550940a81ca       8c811b4aec35f                                                                                         13 seconds ago       Running             busybox                   1                   c4d124cebb3b3       busybox-fc5497c4f-z28tq
	I0610 12:32:17.043184    8536 command_runner.go:130] > 24f3f7e041f98       cbb01a7bd410d                                                                                         13 seconds ago       Running             coredns                   1                   241c4811748fa       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:17.043281    8536 command_runner.go:130] > e934ffe0f9032       6e38f40d628db                                                                                         30 seconds ago       Running             storage-provisioner       2                   2dd9b423841c9       storage-provisioner
	I0610 12:32:17.043318    8536 command_runner.go:130] > c3c4316beca64       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   0c19b39e15f6a       kindnet-29gbv
	I0610 12:32:17.043318    8536 command_runner.go:130] > cc9dbe4aa4005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   2dd9b423841c9       storage-provisioner
	I0610 12:32:17.043371    8536 command_runner.go:130] > 1de5fa0ef8384       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   06d997d7c306c       kube-proxy-nrpvt
	I0610 12:32:17.043406    8536 command_runner.go:130] > d7941126134f2       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   5c3da3b59b527       kube-apiserver-multinode-813300
	I0610 12:32:17.043430    8536 command_runner.go:130] > 877ee07c14997       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   b13c0058ce265       etcd-multinode-813300
	I0610 12:32:17.043430    8536 command_runner.go:130] > d90e72ef46704       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   8902dac03acbc       kube-scheduler-multinode-813300
	I0610 12:32:17.043430    8536 command_runner.go:130] > 3bee53d5fef91       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   f56cc8af37db0       kube-controller-manager-multinode-813300
	I0610 12:32:17.043430    8536 command_runner.go:130] > 91782a06524c6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   9ffef928b2474       busybox-fc5497c4f-z28tq
	I0610 12:32:17.043430    8536 command_runner.go:130] > f2e39052db195       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   a1ae7aed00678       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:17.043430    8536 command_runner.go:130] > c39d54960e7d7       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   689b8976cc029       kindnet-29gbv
	I0610 12:32:17.043430    8536 command_runner.go:130] > afad8b05897e5       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   62db1c721951a       kube-proxy-nrpvt
	I0610 12:32:17.043430    8536 command_runner.go:130] > bd1a6cd987430       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e3b6aa9a0e1d1       kube-scheduler-multinode-813300
	I0610 12:32:17.043430    8536 command_runner.go:130] > f1409bf44ff14       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f04d7b3d4fcc6       kube-controller-manager-multinode-813300
	I0610 12:32:17.046097    8536 logs.go:123] Gathering logs for kubelet ...
	I0610 12:32:17.046139    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 12:32:17.080844    8536 command_runner.go:130] > Jun 10 12:30:48 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:17.081485    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322075    1392 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:17.081570    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322142    1392 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.324143    1392 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: E0610 12:30:49.325228    1392 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078361    1448 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078445    1448 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078696    1448 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: E0610 12:30:50.078819    1448 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
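The two aborted starts above are the kubelet failing its credential check fast: no rotated client certificate is visible yet and /etc/kubernetes/bootstrap-kubelet.conf is absent, so run.go exits non-zero and systemd restarts the unit. The third start (below) gets past this because the existing cert/key pair at /var/lib/kubelet/pki/kubelet-client-current.pem is found. A minimal Go sketch of that decision order, assuming simplified paths and ignoring the real kubelet's rotation machinery (this is an illustration, not the kubelet source):

        package main

        import (
                "fmt"
                "os"
        )

        // loadKubeletCredentials mirrors the startup order the log implies:
        // prefer an already-rotated client cert, otherwise require the
        // bootstrap kubeconfig, and fail hard when neither exists.
        func loadKubeletCredentials(clientCert, bootstrapConf string) error {
                if _, err := os.Stat(clientCert); err == nil {
                        return nil // "Client rotation is on" and a pem exists: proceed
                }
                if _, err := os.Stat(bootstrapConf); err != nil {
                        // Matches the fatal error at 12:30:49 and 12:30:50 above.
                        return fmt.Errorf("failed to run Kubelet: unable to load bootstrap kubeconfig: stat %s: %w", bootstrapConf, err)
                }
                return nil // bootstrap in background
        }

        func main() {
                err := loadKubeletCredentials(
                        "/var/lib/kubelet/pki/kubelet-client-current.pem",
                        "/etc/kubernetes/bootstrap-kubelet.conf",
                )
                fmt.Println("startup check:", err)
        }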
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:53 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021338    1528 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021853    1528 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.022286    1528 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.024650    1528 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.040752    1528 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.082883    1528 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.083180    1528 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085143    1528 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085256    1528 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-813300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Top
ologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.086924    1528 topology_manager.go:138] "Creating topology manager with none policy"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.087122    1528 container_manager_linux.go:301] "Creating device plugin manager"
	I0610 12:32:17.082192    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.088486    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:17.082192    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.090915    1528 kubelet.go:400] "Attempting to sync node with API server"
	I0610 12:32:17.082243    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091108    1528 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0610 12:32:17.082243    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091402    1528 kubelet.go:312] "Adding apiserver pod source"
	I0610 12:32:17.082282    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.092259    1528 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0610 12:32:17.082282    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.097253    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.082351    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.097520    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.082392    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.099693    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.082443    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.099740    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.082484    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.099843    1528 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0610 12:32:17.082563    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.102710    1528 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0610 12:32:17.082563    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.103981    1528 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0610 12:32:17.082563    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.107194    1528 server.go:1264] "Started kubelet"
	I0610 12:32:17.082609    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.120692    1528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0610 12:32:17.082655    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.122088    1528 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0610 12:32:17.082703    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.125028    1528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0610 12:32:17.082731    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.128857    1528 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0610 12:32:17.082731    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.132449    1528 server.go:455] "Adding debug handlers to kubelet server"
	I0610 12:32:17.082781    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.124281    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:17.082838    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.137444    1528 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0610 12:32:17.082838    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.139221    1528 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0610 12:32:17.082838    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.141909    1528 factory.go:221] Registration of the systemd container factory successfully
	I0610 12:32:17.082902    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147241    1528 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0610 12:32:17.082902    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147375    1528 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0610 12:32:17.082969    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.144942    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="200ms"
	I0610 12:32:17.082969    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.143108    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.083064    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.154145    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.083064    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.179909    1528 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0610 12:32:17.083124    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180022    1528 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0610 12:32:17.083124    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180086    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:17.083124    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181162    1528 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0610 12:32:17.083124    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181233    1528 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0610 12:32:17.083193    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181261    1528 policy_none.go:49] "None policy: Start"
	I0610 12:32:17.083193    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.192385    1528 reconciler.go:26] "Reconciler: start to sync state"
	I0610 12:32:17.083193    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193179    1528 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0610 12:32:17.083193    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193256    1528 state_mem.go:35] "Initializing new in-memory state store"
	I0610 12:32:17.083266    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193830    1528 state_mem.go:75] "Updated machine memory state"
	I0610 12:32:17.083266    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.197194    1528 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0610 12:32:17.083266    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.204265    1528 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0610 12:32:17.083266    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.219894    1528 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0610 12:32:17.083361    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.226098    1528 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-813300\" not found"
	I0610 12:32:17.083361    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.226649    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0610 12:32:17.083361    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.230123    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0610 12:32:17.083435    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231021    1528 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0610 12:32:17.083435    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231133    1528 kubelet.go:2337] "Starting kubelet main sync loop"
	I0610 12:32:17.083435    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.231189    1528 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0610 12:32:17.083499    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.244084    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:17.083499    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.247037    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.083562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.247227    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.083562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.253607    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:17.083562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.255809    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:17.083562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:17.083644    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:17.083702    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:17.083702    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:17.083814    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334683    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62db1c721951a36c62a6369a30c651a661eb2871f8363fa341ef8ad7b7080a07"
	I0610 12:32:17.083814    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334742    1528 topology_manager.go:215] "Topology Admit Handler" podUID="180cf4cc399d604c28cc4df1442ebd5a" podNamespace="kube-system" podName="kube-apiserver-multinode-813300"
	I0610 12:32:17.083814    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.336338    1528 topology_manager.go:215] "Topology Admit Handler" podUID="37865ce1914dc04a4a0a25e98b80ce35" podNamespace="kube-system" podName="kube-controller-manager-multinode-813300"
	I0610 12:32:17.083941    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.338106    1528 topology_manager.go:215] "Topology Admit Handler" podUID="4d9c84710aef19c4449f4b7691d0af07" podNamespace="kube-system" podName="kube-scheduler-multinode-813300"
	I0610 12:32:17.083977    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340794    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7d28a97ba1c48cbe8edd3eab76f64cdcdebf920a03921644f63d12856b642f0"
	I0610 12:32:17.084043    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340848    1528 topology_manager.go:215] "Topology Admit Handler" podUID="76e8893277ba7cea6624561880496e47" podNamespace="kube-system" podName="etcd-multinode-813300"
	I0610 12:32:17.084113    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.341927    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04d7b3d4fcc648cd6b447a383defba86200f1071acc892670457ebeebb52f22"
	I0610 12:32:17.084113    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.342208    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bc6043f7b92f091f4ceee7db3e11617072391c6e5303f4ecdafdb06d4b585a"
	I0610 12:32:17.084113    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.356667    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="400ms"
	I0610 12:32:17.084113    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.365771    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c"
	I0610 12:32:17.084229    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.380268    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b6aa9a0e1d1cbcee858808fc74f396cfba20777f2316093484920397e9b4ca"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397846    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-ca-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397877    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397922    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-flexvolume-dir\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397961    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-k8s-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397979    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-kubeconfig\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398000    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-data\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398019    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-k8s-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398038    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-ca-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398055    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d9c84710aef19c4449f4b7691d0af07-kubeconfig\") pod \"kube-scheduler-multinode-813300\" (UID: \"4d9c84710aef19c4449f4b7691d0af07\") " pod="kube-system/kube-scheduler-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398073    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-certs\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.400870    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.416196    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689b8976cc0293bf6ae2ffaf7abbe0a59cfa7521907fd652e86da3912515d25d"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.442360    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10e49596de5e51f9986bebf2105f07084a083e5e8c2ab50684531210b032662"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.454932    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.456598    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.759421    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="800ms"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.858477    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.859580    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.205231    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.205310    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.248476    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.249836    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.085115    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.406658    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.085115    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.406731    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.085115    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.487592    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99"
	I0610 12:32:17.085213    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.561164    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="1.6s"
	I0610 12:32:17.085213    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.661352    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:17.085313    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.663943    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:17.085313    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.751130    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.085388    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.751205    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
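Every reflector warning in this window is the same dial failure against 172.17.150.144:8443: the API server container is still coming up after the restart, so client-go's list/watch calls are refused and retried until node registration finally succeeds at 12:30:59 (below). A hedged sketch of the underlying wait-for-endpoint pattern, using a hypothetical helper rather than minikube or client-go code:

        package main

        import (
                "fmt"
                "net"
                "time"
        )

        // waitForAPIServer polls a TCP dial until the endpoint accepts
        // connections or the deadline passes. Illustrative only.
        func waitForAPIServer(addr string, timeout time.Duration) error {
                deadline := time.Now().Add(timeout)
                for time.Now().Before(deadline) {
                        conn, err := net.DialTimeout("tcp", addr, time.Second)
                        if err == nil {
                                conn.Close()
                                return nil
                        }
                        time.Sleep(500 * time.Millisecond) // retry, as the reflectors do
                }
                return fmt.Errorf("apiserver at %s not reachable within %v", addr, timeout)
        }

        func main() {
                fmt.Println(waitForAPIServer("172.17.150.144:8443", 30*time.Second))
        }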
	I0610 12:32:17.085465    8536 command_runner.go:130] > Jun 10 12:30:56 multinode-813300 kubelet[1528]: E0610 12:30:56.215699    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:17.085465    8536 command_runner.go:130] > Jun 10 12:30:57 multinode-813300 kubelet[1528]: I0610 12:30:57.265569    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:17.085465    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636898    1528 kubelet_node_status.go:112] "Node was previously registered" node="multinode-813300"
	I0610 12:32:17.085465    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636993    1528 kubelet_node_status.go:76] "Successfully registered node" node="multinode-813300"
	I0610 12:32:17.085545    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.638685    1528 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0610 12:32:17.085545    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639257    1528 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0610 12:32:17.085698    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639985    1528 setters.go:580] "Node became not ready" node="multinode-813300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-10T12:30:59Z","lastTransitionTime":"2024-06-10T12:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
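From this point until the kindnet pod restarts and rewrites its config, the node reports NotReady with "cni config uninitialized": the runtime's network readiness gate amounts to whether a CNI network config is present on disk. A rough illustration of that check, assuming the conventional /etc/cni/net.d location (a sketch, not the CRI implementation):

        package main

        import (
                "fmt"
                "path/filepath"
        )

        // cniConfigured reports whether any CNI network config exists in dir.
        // /etc/cni/net.d is the conventional default; sketch only.
        func cniConfigured(dir string) bool {
                for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
                        if m, _ := filepath.Glob(filepath.Join(dir, pat)); len(m) > 0 {
                                return true
                        }
                }
                return false
        }

        func main() {
                fmt.Println("network ready:", cniConfigured("/etc/cni/net.d"))
        }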
	I0610 12:32:17.085698    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.103240    1528 apiserver.go:52] "Watching apiserver"
	I0610 12:32:17.085698    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109200    1528 topology_manager.go:215] "Topology Admit Handler" podUID="40bf0aff-00b2-40c7-bed7-52b8cadbc3a1" podNamespace="kube-system" podName="kube-proxy-nrpvt"
	I0610 12:32:17.085698    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109472    1528 topology_manager.go:215] "Topology Admit Handler" podUID="aad8124e-6c05-4719-9adb-edc11b3cce42" podNamespace="kube-system" podName="kindnet-29gbv"
	I0610 12:32:17.085698    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109721    1528 topology_manager.go:215] "Topology Admit Handler" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kbhvv"
	I0610 12:32:17.085818    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109954    1528 topology_manager.go:215] "Topology Admit Handler" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e" podNamespace="kube-system" podName="storage-provisioner"
	I0610 12:32:17.085818    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110077    1528 topology_manager.go:215] "Topology Admit Handler" podUID="3191c71a-8c87-4390-8232-8653f494d1f0" podNamespace="default" podName="busybox-fc5497c4f-z28tq"
	I0610 12:32:17.085911    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.110308    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.085911    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110641    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-813300" podUID="f824b391-b3d2-49ec-ba7d-863cb2150f81"
	I0610 12:32:17.085911    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.111896    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:17.085996    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.115871    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.085996    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.147565    1528 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0610 12:32:17.086098    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.155423    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:17.086098    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160314    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6dfedc3-d6ff-412c-8a13-40a493c4199e-tmp\") pod \"storage-provisioner\" (UID: \"f6dfedc3-d6ff-412c-8a13-40a493c4199e\") " pod="kube-system/storage-provisioner"
	I0610 12:32:17.086177    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160428    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-cni-cfg\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:17.086177    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-xtables-lock\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:17.086255    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161224    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-xtables-lock\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:17.086255    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161359    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-lib-modules\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:17.086333    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162089    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.086333    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162182    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.662151031 +0000 UTC m=+6.753273950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.086414    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.162238    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-lib-modules\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:17.086414    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.175000    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-813300"
	I0610 12:32:17.086414    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.186991    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086491    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187290    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086568    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.687498638 +0000 UTC m=+6.778621457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086568    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.246331    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f80d01e953cc664fc05c397fdad000" path="/var/lib/kubelet/pods/93f80d01e953cc664fc05c397fdad000/volumes"
	I0610 12:32:17.086568    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.248399    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa7bd9cfb361baaed8d7d5729a6c77c" path="/var/lib/kubelet/pods/baa7bd9cfb361baaed8d7d5729a6c77c/volumes"
	I0610 12:32:17.086647    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.316426    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-813300" podStartSLOduration=0.316407314 podStartE2EDuration="316.407314ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.316147208 +0000 UTC m=+6.407270027" watchObservedRunningTime="2024-06-10 12:31:00.316407314 +0000 UTC m=+6.407530233"
	I0610 12:32:17.086722    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.439081    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-813300" podStartSLOduration=0.439018164 podStartE2EDuration="439.018164ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.409703778 +0000 UTC m=+6.500826597" watchObservedRunningTime="2024-06-10 12:31:00.439018164 +0000 UTC m=+6.530141083"
	I0610 12:32:17.086722    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.631684    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:17.086799    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667882    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.086799    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667966    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.667947638 +0000 UTC m=+7.759070557 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.086878    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769226    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086878    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769334    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086955    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769428    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.769408565 +0000 UTC m=+7.860531384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086955    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.231939    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.087032    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.679952    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.087032    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.680142    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.680120563 +0000 UTC m=+9.771243482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.087110    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.781772    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087110    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782050    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087186    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782132    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.7821123 +0000 UTC m=+9.873235219 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087277    8536 command_runner.go:130] > Jun 10 12:31:02 multinode-813300 kubelet[1528]: E0610 12:31:02.234039    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.087277    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.232296    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.087353    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.701884    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.087353    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.702058    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.702037863 +0000 UTC m=+13.793160782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.087433    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802160    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087433    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802233    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087517    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802292    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.802272966 +0000 UTC m=+13.893395785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087517    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.207349    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.087611    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.238069    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.087611    8536 command_runner.go:130] > Jun 10 12:31:05 multinode-813300 kubelet[1528]: E0610 12:31:05.232753    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.087707    8536 command_runner.go:130] > Jun 10 12:31:06 multinode-813300 kubelet[1528]: E0610 12:31:06.233804    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.087707    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.231988    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.087805    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736592    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.087805    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736825    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.736801176 +0000 UTC m=+21.827923995 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.087887    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837037    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087887    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837146    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087968    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837219    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.837199504 +0000 UTC m=+21.928322423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.088003    8536 command_runner.go:130] > Jun 10 12:31:08 multinode-813300 kubelet[1528]: E0610 12:31:08.232310    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.208416    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.231620    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:10 multinode-813300 kubelet[1528]: E0610 12:31:10.233882    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:11 multinode-813300 kubelet[1528]: E0610 12:31:11.232126    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:12 multinode-813300 kubelet[1528]: E0610 12:31:12.233695    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:13 multinode-813300 kubelet[1528]: E0610 12:31:13.231660    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.210433    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.234870    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.232790    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816637    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816990    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.816931565 +0000 UTC m=+37.908054384 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918429    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918619    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918694    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.918675278 +0000 UTC m=+38.009798097 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:16 multinode-813300 kubelet[1528]: E0610 12:31:16.234954    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:17 multinode-813300 kubelet[1528]: E0610 12:31:17.231668    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088632    8536 command_runner.go:130] > Jun 10 12:31:18 multinode-813300 kubelet[1528]: E0610 12:31:18.232656    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088632    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.214153    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.088632    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.231639    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088632    8536 command_runner.go:130] > Jun 10 12:31:20 multinode-813300 kubelet[1528]: E0610 12:31:20.234429    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088788    8536 command_runner.go:130] > Jun 10 12:31:21 multinode-813300 kubelet[1528]: E0610 12:31:21.232080    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:22 multinode-813300 kubelet[1528]: E0610 12:31:22.232638    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:23 multinode-813300 kubelet[1528]: E0610 12:31:23.233105    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.216593    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.233280    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:25 multinode-813300 kubelet[1528]: E0610 12:31:25.232513    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:26 multinode-813300 kubelet[1528]: E0610 12:31:26.232337    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:27 multinode-813300 kubelet[1528]: E0610 12:31:27.233152    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:28 multinode-813300 kubelet[1528]: E0610 12:31:28.234103    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.218816    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.232070    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:30 multinode-813300 kubelet[1528]: E0610 12:31:30.231766    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.231673    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884791    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884975    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.884956587 +0000 UTC m=+69.976079506 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.089417    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985181    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.089417    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985216    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.089417    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.98525598 +0000 UTC m=+70.076378799 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.089417    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.232018    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.476305    1528 scope.go:117] "RemoveContainer" containerID="d32ce22e31b06bacb7530f3513c1f864d77685269868404ad7c71a4f15d91e41"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.477175    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.477659    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f6dfedc3-d6ff-412c-8a13-40a493c4199e)\"" pod="kube-system/storage-provisioner" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:33 multinode-813300 kubelet[1528]: E0610 12:31:33.232631    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 kubelet[1528]: I0610 12:31:47.231895    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.214930    1528 scope.go:117] "RemoveContainer" containerID="34b9299d74e382eadb8e7df1029506efc87e283ac8b38024d9524b8bb815f705"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: E0610 12:31:54.266189    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.275663    1528 scope.go:117] "RemoveContainer" containerID="ba52603f8387590319a4d5a9511265065e2f90bff6628bec2f622754e034c70a"
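
The durationBeforeRetry values in the kubelet entries above double after every failed MountVolume.SetUp — 2s, 4s, 8s, 16s, then 32s — always against the same unregistered "kube-root-ca.crt" and "coredns" objects, until the node's API objects are re-synced. A minimal sketch of that doubling-with-cap retry pattern in Go (the function name, the 2m cap, and the attempt count are illustrative assumptions, not kubelet's actual nestedpendingoperations constants):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff retries op, doubling the wait after each failure
    // up to maxDelay -- the shape of the "No retries permitted until ..."
    // lines above. Purely illustrative.
    func retryWithBackoff(op func() error, initial, maxDelay time.Duration, attempts int) error {
        delay := initial
        for i := 0; i < attempts; i++ {
            if err := op(); err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed; no retries permitted for %s\n", i+1, delay)
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
        return errors.New("operation still failing after all attempts")
    }

    func main() {
        _ = retryWithBackoff(func() error {
            return errors.New(`object "kube-system"/"coredns" not registered`)
        }, 2*time.Second, 2*time.Minute, 5)
    }
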
	I0610 12:32:19.643034    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods
	I0610 12:32:19.643034    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:19.643034    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:19.643034    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:19.650110    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:32:19.650110    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:19.650110    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:19.650110    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:19 GMT
	I0610 12:32:19.650110    8536 round_trippers.go:580]     Audit-Id: ffda0f58-706e-4237-be51-4f259c9a61a6
	I0610 12:32:19.650110    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:19.650110    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:19.650110    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:19.652085    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1841"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1827","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0610 12:32:19.656195    8536 system_pods.go:59] 12 kube-system pods found
	I0610 12:32:19.656195    8536 system_pods.go:61] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "etcd-multinode-813300" [f9259e5e-61e9-4252-b7c6-de5d499eb9c1] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kindnet-2pc4j" [966ce4c1-e9ee-48d6-9e52-98143fa03e67] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kindnet-r4nfq" [dceb3d20-8d04-4408-927f-1c195558dd19] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-apiserver-multinode-813300" [2cf29b2c-a2a9-46ec-bbc8-fe884e97df06] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-proxy-rx2b2" [ce59a99b-a561-4598-9399-147f748433a2] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-proxy-vw56h" [f3f9e738-89d2-4776-a212-a1ca28952f7c] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:32:19.656195    8536 system_pods.go:74] duration metric: took 3.8088891s to wait for pod list to return data ...
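
The PodList fetch and the twelve "Running" lines above amount to a single list call against /api/v1/namespaces/kube-system/pods followed by a phase check per pod. A rough client-go equivalent (a sketch assuming a kubeconfig-based clientset; this is not minikube's own system_pods helper):

    package main

    import (
        "context"
        "fmt"
        "log"
        "os"
        "path/filepath"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            status := "Running"
            if p.Status.Phase != v1.PodRunning {
                status = string(p.Status.Phase) // e.g. Pending while CNI is still down
            }
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, status)
        }
    }
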
	I0610 12:32:19.656195    8536 default_sa.go:34] waiting for default service account to be created ...
	I0610 12:32:19.656428    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/default/serviceaccounts
	I0610 12:32:19.656428    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:19.656428    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:19.656428    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:19.659009    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:19.660013    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:19.660013    8536 round_trippers.go:580]     Audit-Id: 87f7f8e9-1525-4e4f-affd-e17bc21a5585
	I0610 12:32:19.660013    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:19.660013    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:19.660013    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:19.660013    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:19.660013    8536 round_trippers.go:580]     Content-Length: 262
	I0610 12:32:19.660013    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:19 GMT
	I0610 12:32:19.660013    8536 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1841"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2033967b-ff48-4641-b518-45705bf023c6","resourceVersion":"336","creationTimestamp":"2024-06-10T12:08:15Z"}}]}
	I0610 12:32:19.660013    8536 default_sa.go:45] found service account: "default"
	I0610 12:32:19.660013    8536 default_sa.go:55] duration metric: took 3.8179ms for default service account to be created ...
	I0610 12:32:19.660013    8536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 12:32:19.660013    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods
	I0610 12:32:19.660013    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:19.660013    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:19.660013    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:19.666359    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:32:19.666359    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:19.666359    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:19 GMT
	I0610 12:32:19.666359    8536 round_trippers.go:580]     Audit-Id: f3197283-3d72-4030-83e3-14ba38baaa31
	I0610 12:32:19.666541    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:19.666541    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:19.666541    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:19.666541    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:19.668047    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1841"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1827","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0610 12:32:19.672258    8536 system_pods.go:86] 12 kube-system pods found
	I0610 12:32:19.672258    8536 system_pods.go:89] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "etcd-multinode-813300" [f9259e5e-61e9-4252-b7c6-de5d499eb9c1] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kindnet-2pc4j" [966ce4c1-e9ee-48d6-9e52-98143fa03e67] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kindnet-r4nfq" [dceb3d20-8d04-4408-927f-1c195558dd19] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-apiserver-multinode-813300" [2cf29b2c-a2a9-46ec-bbc8-fe884e97df06] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-proxy-rx2b2" [ce59a99b-a561-4598-9399-147f748433a2] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-proxy-vw56h" [f3f9e738-89d2-4776-a212-a1ca28952f7c] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:32:19.672874    8536 system_pods.go:126] duration metric: took 12.2449ms to wait for k8s-apps to be running ...
	I0610 12:32:19.672874    8536 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:32:19.683797    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:32:19.712741    8536 system_svc.go:56] duration metric: took 39.8671ms WaitForService to wait for kubelet
	I0610 12:32:19.712824    8536 kubeadm.go:576] duration metric: took 1m15.1914955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
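
The WaitForService step above reduces to one remote command, sudo systemctl is-active --quiet service kubelet, whose exit status (0 for active) is the entire signal — with --quiet nothing needs to be parsed. Run locally rather than through minikube's ssh_runner, the same check could look like this sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // unitActive reports whether a systemd unit is active. exec's Run
    // returns a non-nil error for any non-zero exit status, so the
    // boolean falls straight out of the error check.
    func unitActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", unitActive("kubelet"))
    }
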
	I0610 12:32:19.712824    8536 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:32:19.713019    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes
	I0610 12:32:19.713087    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:19.713087    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:19.713087    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:19.717280    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:19.717280    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:19.717280    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:19.718122    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:19.718122    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:19 GMT
	I0610 12:32:19.718122    8536 round_trippers.go:580]     Audit-Id: d3e3a9ae-bd4b-4b22-8e97-6e4006f75bc3
	I0610 12:32:19.718122    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:19.718122    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:19.718615    8536 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1841"},"items":[{"metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16273 chars]
	I0610 12:32:19.719336    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:32:19.719336    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:32:19.719336    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:32:19.719336    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:32:19.719336    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:32:19.719336    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:32:19.719336    8536 node_conditions.go:105] duration metric: took 6.5117ms to run NodePressure ...
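
The NodePressure pass reads each node's capacity out of the NodeList fetched just above, which is why the same pair of lines (17734596Ki ephemeral storage, 2 CPUs) repeats three times — once per node in the cluster. Extracting those figures from client-go types is two map lookups on Status.Capacity (a fragment-level sketch; imports and the clientset match the pod-list example earlier):

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // printNodeCapacity reproduces the per-node lines above from a
    // NodeList already fetched via cs.CoreV1().Nodes().List(...).
    func printNodeCapacity(nodes *v1.NodeList) {
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[v1.ResourceCPU]
            fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
            fmt.Printf("node cpu capacity is %s\n", cpu.String())
        }
    }
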
	I0610 12:32:19.719336    8536 start.go:240] waiting for startup goroutines ...
	I0610 12:32:19.719336    8536 start.go:245] waiting for cluster config update ...
	I0610 12:32:19.719336    8536 start.go:254] writing updated cluster config ...
	I0610 12:32:19.726596    8536 out.go:177] 
	I0610 12:32:19.729829    8536 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:32:19.740405    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:32:19.740405    8536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:32:19.746414    8536 out.go:177] * Starting "multinode-813300-m02" worker node in "multinode-813300" cluster
	I0610 12:32:19.748409    8536 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:32:19.748409    8536 cache.go:56] Caching tarball of preloaded images
	I0610 12:32:19.748409    8536 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:32:19.749413    8536 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 12:32:19.749413    8536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:32:19.751404    8536 start.go:360] acquireMachinesLock for multinode-813300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:32:19.751404    8536 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300-m02"
	I0610 12:32:19.751404    8536 start.go:96] Skipping create...Using existing machine configuration
	I0610 12:32:19.751404    8536 fix.go:54] fixHost starting: m02
	I0610 12:32:19.752518    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:22.145001    8536 main.go:141] libmachine: [stdout =====>] : Off
	
	I0610 12:32:22.145001    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:22.145001    8536 fix.go:112] recreateIfNeeded on multinode-813300-m02: state=Stopped err=<nil>
	W0610 12:32:22.145478    8536 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 12:32:22.150442    8536 out.go:177] * Restarting existing hyperv VM for "multinode-813300-m02" ...
	I0610 12:32:22.157329    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300-m02
	I0610 12:32:25.569690    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:32:25.570595    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:25.570595    8536 main.go:141] libmachine: Waiting for host to start...
	I0610 12:32:25.570666    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:28.042542    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:28.042542    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:28.042542    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:32:30.783114    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:32:30.783598    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:31.795751    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:34.187966    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:34.187966    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:34.188800    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:32:36.979297    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:32:36.979297    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:37.994728    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:40.374953    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:40.375046    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:40.375046    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:32:43.144200    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:32:43.144200    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:44.155496    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:46.557278    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:46.557660    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:46.557727    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:32:49.306332    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:32:49.306332    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:50.318623    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:52.761527    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:52.761527    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:52.761527    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:32:55.546956    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:32:55.546956    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:55.550287    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:57.879214    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:57.879214    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:57.880290    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:00.654758    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:00.655199    8536 main.go:141] libmachine: [stderr =====>] : 
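
Everything from the Start-VM call down to this point is one polling loop: query ( Hyper-V\Get-VM ... ).state, then the first adapter's first IP, and go around again (roughly every five seconds here) until the guest finally reports 172.17.144.123 — the empty stdout lines above just mean the address is not assigned yet. The shape of that loop, driving PowerShell from Go (a sketch; libmachine's real Hyper-V driver adds timeouts and richer error handling):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell expression and returns trimmed stdout.
    func ps(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls VM state and the adapter's address list until the
    // guest reports an IP, mirroring the loop in the log above.
    func waitForIP(vm string) (string, error) {
        for i := 0; i < 60; i++ {
            state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil {
                return "", err
            }
            if state == "Running" {
                ip, err := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
                if err == nil && ip != "" {
                    return ip, nil // guest network stack is finally up
                }
            }
            time.Sleep(5 * time.Second) // empty stdout just means "not up yet"
        }
        return "", fmt.Errorf("timed out waiting for %s to report an address", vm)
    }

    func main() {
        fmt.Println(waitForIP("multinode-813300-m02"))
    }
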
	I0610 12:33:00.655199    8536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:33:00.658245    8536 machine.go:94] provisionDockerMachine start ...
	I0610 12:33:00.658321    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:02.977040    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:02.977040    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:02.977040    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:05.730578    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:05.730578    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:05.736885    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:05.737482    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:05.737482    8536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:33:05.878041    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:33:05.878097    8536 buildroot.go:166] provisioning hostname "multinode-813300-m02"
	I0610 12:33:05.878153    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:08.230250    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:08.230250    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:08.230250    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:11.059855    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:11.060105    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:11.065817    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:11.066491    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:11.066491    8536 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300-m02 && echo "multinode-813300-m02" | sudo tee /etc/hostname
	I0610 12:33:11.233601    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300-m02
	
	I0610 12:33:11.233601    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:13.602050    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:13.602050    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:13.602050    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:16.449433    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:16.450208    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:16.455912    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:16.456589    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:16.456589    8536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:33:16.608087    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
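
The hostname provisioning above — hostname, then sudo hostname ... | sudo tee /etc/hostname, then the idempotent /etc/hosts edit — all runs through the same "native" SSH client, keyed by the id_rsa path and docker user that appear later in this log. A bare-bones version of that client using golang.org/x/crypto/ssh (a sketch; libmachine's wrapper also handles retries and agent auth):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa`)
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "172.17.144.123:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        fmt.Printf("SSH cmd err, output: %v: %s", err, out)
    }
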
	I0610 12:33:16.608087    8536 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:33:16.608625    8536 buildroot.go:174] setting up certificates
	I0610 12:33:16.608625    8536 provision.go:84] configureAuth start
	I0610 12:33:16.608711    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:18.978716    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:18.979390    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:18.979448    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:21.793174    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:21.793367    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:21.793456    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:24.178467    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:24.178467    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:24.178828    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:26.969751    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:26.969751    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:26.969751    8536 provision.go:143] copyHostCerts
	I0610 12:33:26.969905    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 12:33:26.969905    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:33:26.969905    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:33:26.970807    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:33:26.971863    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 12:33:26.972385    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:33:26.972385    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:33:26.972758    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:33:26.973731    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 12:33:26.973731    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:33:26.973731    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:33:26.974400    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:33:26.975104    8536 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300-m02 san=[127.0.0.1 172.17.144.123 localhost minikube multinode-813300-m02]
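
The server certificate generated above is only usable because its SANs enumerate every name a client might dial: 127.0.0.1, 172.17.144.123, localhost, minikube and multinode-813300-m02, with org jenkins.multinode-813300-m02. With crypto/x509 that comes down to the IPAddresses and DNSNames fields of the template (a sketch: the serial number, lifetime and key usages are illustrative, and caCert/caKey/pub stand in for material loaded from the .minikube certs directory):

    import (
        "crypto"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // serverCertDER builds a CA-signed server certificate whose SANs
    // match the san=[...] list logged above.
    func serverCertDER(caCert *x509.Certificate, caKey crypto.Signer, pub crypto.PublicKey) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1), // illustrative; real code uses a random serial
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-813300-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.144.123")},
            DNSNames:     []string{"localhost", "minikube", "multinode-813300-m02"},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, pub, caKey)
    }
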
	I0610 12:33:27.303350    8536 provision.go:177] copyRemoteCerts
	I0610 12:33:27.315963    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:33:27.315963    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:29.645541    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:29.645614    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:29.645614    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:32.416141    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:32.416338    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:32.416338    8536 sshutil.go:53] new ssh client: &{IP:172.17.144.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:33:32.525224    8536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2092191s)
	I0610 12:33:32.525224    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 12:33:32.525224    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:33:32.575432    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 12:33:32.575996    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 12:33:32.631616    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 12:33:32.632313    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 12:33:32.686553    8536 provision.go:87] duration metric: took 16.0777996s to configureAuth
	I0610 12:33:32.686553    8536 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:33:32.687351    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:33:32.687483    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:34.999631    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:34.999631    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:34.999631    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:37.730266    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:37.730266    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:37.735498    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:37.735736    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:37.735736    8536 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:33:37.866123    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:33:37.866123    8536 buildroot.go:70] root file system type: tmpfs
	I0610 12:33:37.866123    8536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:33:37.866656    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:40.179815    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:40.179815    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:40.179815    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:42.997911    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:42.997911    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:43.003673    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:43.003673    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:43.004229    8536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.150.144"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:33:43.180023    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.150.144
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 12:33:43.180113    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:45.547268    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:45.547268    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:45.548022    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:48.274426    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:48.274426    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:48.280050    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:48.280110    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:48.280110    8536 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:33:50.776530    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:33:50.776530    8536 machine.go:97] duration metric: took 50.1178082s to provisionDockerMachine
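The diff-or-install one-liner above is an idempotent update pattern: stage docker.service.new, and only when it differs from the installed copy move it into place and daemon-reload/enable/restart. Here the diff reported "can't stat" because the root filesystem is tmpfs (see the earlier df check), so the old unit did not survive the reboot and the staged file is installed unconditionally. A rough local-filesystem sketch of the same compare-then-swap logic, illustrative only since the real steps run over SSH:

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	// updateUnit mirrors the shell one-liner above: install the new unit and
	// restart docker only when its content actually changed.
	func updateUnit(current, next string) error {
		old, _ := os.ReadFile(current) // a missing file (first boot) reads as empty
		fresh, err := os.ReadFile(next)
		if err != nil {
			return err
		}
		if bytes.Equal(old, fresh) {
			return os.Remove(next) // nothing to do, drop the staged copy
		}
		if err := os.Rename(next, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
		} {
			if err := exec.Command("systemctl", args...).Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		_ = updateUnit("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new")
	}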
	I0610 12:33:50.776530    8536 start.go:293] postStartSetup for "multinode-813300-m02" (driver="hyperv")
	I0610 12:33:50.776530    8536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:33:50.789531    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:33:50.789531    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:53.104831    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:53.105368    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:53.105368    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:55.919870    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:55.919870    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:55.921188    8536 sshutil.go:53] new ssh client: &{IP:172.17.144.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:33:56.041139    8536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2515031s)
	I0610 12:33:56.053185    8536 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:33:56.061684    8536 command_runner.go:130] > NAME=Buildroot
	I0610 12:33:56.062022    8536 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 12:33:56.062022    8536 command_runner.go:130] > ID=buildroot
	I0610 12:33:56.062022    8536 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 12:33:56.062022    8536 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 12:33:56.062022    8536 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:33:56.062142    8536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:33:56.062410    8536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:33:56.063328    8536 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:33:56.063422    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 12:33:56.077388    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:33:56.100559    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:33:56.154264    8536 start.go:296] duration metric: took 5.3776908s for postStartSetup
	I0610 12:33:56.154361    8536 fix.go:56] duration metric: took 1m36.4021859s for fixHost
	I0610 12:33:56.154361    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:58.515535    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:58.515535    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:58.515535    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:01.349664    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:01.350032    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:01.356578    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:34:01.357362    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:34:01.357362    8536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 12:34:01.498897    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718022841.497997726
	
	I0610 12:34:01.498897    8536 fix.go:216] guest clock: 1718022841.497997726
	I0610 12:34:01.498897    8536 fix.go:229] Guest: 2024-06-10 12:34:01.497997726 +0000 UTC Remote: 2024-06-10 12:33:56.1543615 +0000 UTC m=+317.671170201 (delta=5.343636226s)
	I0610 12:34:01.498988    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:34:03.837377    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:03.837377    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:03.837941    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:06.688872    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:06.688872    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:06.695433    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:34:06.695433    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:34:06.695433    8536 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718022841
	I0610 12:34:06.846091    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:34:01 UTC 2024
	
	I0610 12:34:06.846091    8536 fix.go:236] clock set: Mon Jun 10 12:34:01 UTC 2024
	 (err=<nil>)
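The clock fixup above works by reading the guest's clock with date +%s.%N, comparing it to the host's recorded wall time, and, when the drift is large enough, resetting the guest with sudo date -s @<seconds>. A small sketch of that drift check using the values from this log; the one-second threshold is an assumption for illustration, not necessarily what fix.go uses:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Guest reply to `date +%s.%N`, as captured in the log above.
		guestRaw := "1718022841.497997726"
		secs, _ := strconv.ParseInt(strings.Split(guestRaw, ".")[0], 10, 64)
		guest := time.Unix(secs, 0).UTC()

		// Stand-in for the host's own reading taken at the same moment.
		host := time.Date(2024, 6, 10, 12, 33, 56, 0, time.UTC)

		if delta := guest.Sub(host); delta > time.Second || delta < -time.Second {
			// Same shape of command the provisioner sends over SSH.
			fmt.Printf("drift %v, would run: sudo date -s @%d\n", delta, host.Unix())
		}
	}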
	I0610 12:34:06.846091    8536 start.go:83] releasing machines lock for "multinode-813300-m02", held for 1m47.0938302s
	I0610 12:34:06.847138    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:34:09.184866    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:09.184866    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:09.184992    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:12.023414    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:12.023414    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:12.026375    8536 out.go:177] * Found network options:
	I0610 12:34:12.029510    8536 out.go:177]   - NO_PROXY=172.17.150.144
	W0610 12:34:12.032010    8536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 12:34:12.038192    8536 out.go:177]   - NO_PROXY=172.17.150.144
	W0610 12:34:12.040219    8536 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 12:34:12.042464    8536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 12:34:12.044408    8536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:34:12.044408    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:34:12.056541    8536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 12:34:12.056541    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:34:14.462480    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:14.462480    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:14.462480    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:14.468962    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:14.468962    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:14.468962    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:17.381494    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:17.381561    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:17.381901    8536 sshutil.go:53] new ssh client: &{IP:172.17.144.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:34:17.425372    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:17.425372    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:17.425788    8536 sshutil.go:53] new ssh client: &{IP:172.17.144.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:34:17.486772    8536 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0610 12:34:17.487116    8536 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.4305314s)
	W0610 12:34:17.487116    8536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:34:17.499691    8536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:34:17.567104    8536 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 12:34:17.567104    8536 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5226517s)
	I0610 12:34:17.567104    8536 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 12:34:17.567104    8536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
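Disabling 87-podman-bridge.conflist above keeps a stray bridge CNI config from conflicting with the CNI minikube manages; the find/-exec mv loop just parks matching files under a .mk_disabled suffix so the change is reversible. An equivalent sketch in Go (name matching on a single directory level, which os.ReadDir matches the -maxdepth 1 semantics of; not the actual minikube code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableConflicting mimics the find/mv loop above: park bridge/podman CNI
	// configs out of the way by appending .mk_disabled.
	func disableConflicting(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var moved []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return moved, err
				}
				moved = append(moved, src)
			}
		}
		return moved, nil
	}

	func main() {
		fmt.Println(disableConflicting("/etc/cni/net.d"))
	}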
	I0610 12:34:17.567104    8536 start.go:494] detecting cgroup driver to use...
	I0610 12:34:17.567104    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:34:17.608087    8536 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 12:34:17.621002    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:34:17.663612    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:34:17.691254    8536 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:34:17.702818    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:34:17.745521    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:34:17.778673    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:34:17.811125    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:34:17.847693    8536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:34:17.883755    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:34:17.919056    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:34:17.954882    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 12:34:17.988734    8536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:34:18.006989    8536 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 12:34:18.020120    8536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:34:18.052391    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:34:18.284080    8536 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 12:34:18.319139    8536 start.go:494] detecting cgroup driver to use...
	I0610 12:34:18.332130    8536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:34:18.364311    8536 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 12:34:18.364311    8536 command_runner.go:130] > [Unit]
	I0610 12:34:18.364311    8536 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 12:34:18.364311    8536 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 12:34:18.364311    8536 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 12:34:18.364311    8536 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 12:34:18.364311    8536 command_runner.go:130] > StartLimitBurst=3
	I0610 12:34:18.364311    8536 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 12:34:18.364311    8536 command_runner.go:130] > [Service]
	I0610 12:34:18.364311    8536 command_runner.go:130] > Type=notify
	I0610 12:34:18.364311    8536 command_runner.go:130] > Restart=on-failure
	I0610 12:34:18.364311    8536 command_runner.go:130] > Environment=NO_PROXY=172.17.150.144
	I0610 12:34:18.364311    8536 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 12:34:18.364311    8536 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 12:34:18.364311    8536 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 12:34:18.364311    8536 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 12:34:18.364311    8536 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 12:34:18.364311    8536 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 12:34:18.364311    8536 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 12:34:18.364311    8536 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 12:34:18.364311    8536 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 12:34:18.364311    8536 command_runner.go:130] > ExecStart=
	I0610 12:34:18.364311    8536 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 12:34:18.364311    8536 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 12:34:18.364311    8536 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 12:34:18.364311    8536 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 12:34:18.364311    8536 command_runner.go:130] > LimitNOFILE=infinity
	I0610 12:34:18.364311    8536 command_runner.go:130] > LimitNPROC=infinity
	I0610 12:34:18.364311    8536 command_runner.go:130] > LimitCORE=infinity
	I0610 12:34:18.364311    8536 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 12:34:18.364311    8536 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 12:34:18.364311    8536 command_runner.go:130] > TasksMax=infinity
	I0610 12:34:18.364311    8536 command_runner.go:130] > TimeoutStartSec=0
	I0610 12:34:18.364311    8536 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 12:34:18.364900    8536 command_runner.go:130] > Delegate=yes
	I0610 12:34:18.364931    8536 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 12:34:18.364931    8536 command_runner.go:130] > KillMode=process
	I0610 12:34:18.364931    8536 command_runner.go:130] > [Install]
	I0610 12:34:18.364931    8536 command_runner.go:130] > WantedBy=multi-user.target
	I0610 12:34:18.382511    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:34:18.417650    8536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:34:18.473526    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:34:18.515086    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:34:18.555200    8536 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:34:18.625294    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:34:18.655900    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:34:18.698130    8536 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 12:34:18.714342    8536 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:34:18.722125    8536 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 12:34:18.737268    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:34:18.758142    8536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:34:18.812211    8536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:34:19.023731    8536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:34:19.213536    8536 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:34:19.213536    8536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
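The 130-byte /etc/docker/daemon.json pushed above is what pins Docker to the cgroupfs cgroup driver. The log does not show the payload; the sketch below writes a plausible file using Docker's documented exec-opts key, not the verbatim content minikube generated:

	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		// Plausible daemon.json shape; the real generated content is not
		// shown in the log. Writing to /etc/docker requires root.
		cfg := map[string]interface{}{
			"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
			"log-driver": "json-file",
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
			panic(err)
		}
	}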
	I0610 12:34:19.261285    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:34:19.475193    8536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:34:22.127243    8536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.651951s)
	I0610 12:34:22.142001    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 12:34:22.181718    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:34:22.221910    8536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 12:34:22.451618    8536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 12:34:22.670927    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:34:22.914816    8536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 12:34:22.962787    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:34:23.005628    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:34:23.236422    8536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 12:34:23.373390    8536 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 12:34:23.389305    8536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 12:34:23.397858    8536 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 12:34:23.397996    8536 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 12:34:23.397996    8536 command_runner.go:130] > Device: 0,22	Inode: 853         Links: 1
	I0610 12:34:23.397996    8536 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 12:34:23.397996    8536 command_runner.go:130] > Access: 2024-06-10 12:34:23.267625237 +0000
	I0610 12:34:23.397996    8536 command_runner.go:130] > Modify: 2024-06-10 12:34:23.267625237 +0000
	I0610 12:34:23.397996    8536 command_runner.go:130] > Change: 2024-06-10 12:34:23.275625184 +0000
	I0610 12:34:23.397996    8536 command_runner.go:130] >  Birth: -
	I0610 12:34:23.397996    8536 start.go:562] Will wait 60s for crictl version
	I0610 12:34:23.411009    8536 ssh_runner.go:195] Run: which crictl
	I0610 12:34:23.417900    8536 command_runner.go:130] > /usr/bin/crictl
	I0610 12:34:23.428590    8536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:34:23.500952    8536 command_runner.go:130] > Version:  0.1.0
	I0610 12:34:23.501055    8536 command_runner.go:130] > RuntimeName:  docker
	I0610 12:34:23.501055    8536 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 12:34:23.501055    8536 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 12:34:23.501135    8536 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 12:34:23.511418    8536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:34:23.555358    8536 command_runner.go:130] > 26.1.4
	I0610 12:34:23.567022    8536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:34:23.602503    8536 command_runner.go:130] > 26.1.4
	I0610 12:34:23.607275    8536 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 12:34:23.610213    8536 out.go:177]   - env NO_PROXY=172.17.150.144
	I0610 12:34:23.612242    8536 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 12:34:23.616240    8536 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 12:34:23.616240    8536 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 12:34:23.616240    8536 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 12:34:23.616240    8536 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 12:34:23.619242    8536 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 12:34:23.619242    8536 ip.go:210] interface addr: 172.17.144.1/20
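The host.minikube.internal mapping depends on the lookup above: walk the host's adapters, take the first whose name matches the "vEthernet (Default Switch)" prefix, and use its IPv4 address (172.17.144.1 here). A standalone sketch of that prefix match with net.Interfaces; findInterfaceIP is an illustrative name, not the ip.go implementation:

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	// findInterfaceIP picks the first interface whose name matches the prefix
	// and returns its IPv4 address.
	func findInterfaceIP(prefix string) (net.IP, error) {
		ifaces, err := net.Interfaces()
		if err != nil {
			return nil, err
		}
		for _, ifc := range ifaces {
			if !strings.HasPrefix(ifc.Name, prefix) {
				continue
			}
			addrs, err := ifc.Addrs()
			if err != nil {
				return nil, err
			}
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
					return ipnet.IP, nil // e.g. 172.17.144.1 on the Default Switch
				}
			}
		}
		return nil, fmt.Errorf("no interface matches %q", prefix)
	}

	func main() {
		ip, err := findInterfaceIP("vEthernet (Default Switch)")
		fmt.Println(ip, err)
	}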
	I0610 12:34:23.630197    8536 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 12:34:23.639089    8536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
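The bash one-liner above updates /etc/hosts idempotently: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the result is copied back over the original. The same filter-and-append logic as a local-file sketch in Go; setHostsEntry is an illustrative name:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry reproduces the { grep -v; echo; } > tmp; cp pattern above:
	// drop any stale line for the name, append the fresh mapping, write back.
	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		_ = setHostsEntry("/etc/hosts", "172.17.144.1", "host.minikube.internal")
	}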
	I0610 12:34:23.663035    8536 mustload.go:65] Loading cluster: multinode-813300
	I0610 12:34:23.667572    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:34:23.668475    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:34:25.989998    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:25.990332    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:25.990332    8536 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:34:25.991117    8536 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300 for IP: 172.17.144.123
	I0610 12:34:25.991117    8536 certs.go:194] generating shared ca certs ...
	I0610 12:34:25.991249    8536 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:34:25.991946    8536 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 12:34:25.992378    8536 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 12:34:25.992568    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 12:34:25.992568    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 12:34:25.992568    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 12:34:25.993109    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 12:34:25.993659    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 12:34:25.993988    8536 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 12:34:25.994091    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 12:34:25.994434    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 12:34:25.994776    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 12:34:25.995087    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 12:34:25.995687    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 12:34:25.995813    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 12:34:25.996099    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:34:25.996334    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 12:34:25.996611    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 12:34:26.061140    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 12:34:26.114878    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 12:34:26.165135    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 12:34:26.221217    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 12:34:26.272772    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 12:34:26.325499    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 12:34:26.389280    8536 ssh_runner.go:195] Run: openssl version
	I0610 12:34:26.399370    8536 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 12:34:26.411296    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 12:34:26.446983    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 12:34:26.453896    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:34:26.454076    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:34:26.465350    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 12:34:26.473541    8536 command_runner.go:130] > 3ec20f2e
	I0610 12:34:26.484158    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 12:34:26.521022    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 12:34:26.556900    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:34:26.565030    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:34:26.565030    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:34:26.575611    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:34:26.585552    8536 command_runner.go:130] > b5213941
	I0610 12:34:26.598017    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 12:34:26.631815    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 12:34:26.666237    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 12:34:26.673134    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:34:26.673318    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:34:26.685683    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 12:34:26.693635    8536 command_runner.go:130] > 51391683
	I0610 12:34:26.705414    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
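Each PEM installed under /usr/share/ca-certificates is then exposed to OpenSSL's trust lookup by symlinking it as /etc/ssl/certs/<subject-hash>.0, which is why the openssl x509 -hash output (51391683 and friends) feeds straight into the ln -fs commands. A sketch of that pairing, shelling out to openssl the same way; trustCert is an illustrative name:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// trustCert mirrors the openssl/ln sequence above: compute the subject hash
	// and expose the cert to OpenSSL's lookup as /etc/ssl/certs/<hash>.0.
	func trustCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // replace a stale link if present
		return os.Symlink(pemPath, link)
	}

	func main() {
		fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem"))
	}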
	I0610 12:34:26.742680    8536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:34:26.750860    8536 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:34:26.750860    8536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:34:26.750860    8536 kubeadm.go:928] updating node {m02 172.17.144.123 8443 v1.30.1 docker false true} ...
	I0610 12:34:26.751533    8536 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.144.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 12:34:26.764383    8536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 12:34:26.792261    8536 command_runner.go:130] > kubeadm
	I0610 12:34:26.792909    8536 command_runner.go:130] > kubectl
	I0610 12:34:26.792909    8536 command_runner.go:130] > kubelet
	I0610 12:34:26.792909    8536 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 12:34:26.804796    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0610 12:34:26.829209    8536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0610 12:34:26.862757    8536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 12:34:26.911979    8536 ssh_runner.go:195] Run: grep 172.17.150.144	control-plane.minikube.internal$ /etc/hosts
	I0610 12:34:26.919006    8536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.150.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:34:26.956971    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:34:27.176894    8536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:34:27.209587    8536 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:34:27.210435    8536 start.go:316] joinCluster: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.150.144 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.144.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.144.46 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:34:27.210597    8536 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.17.144.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:34:27.210663    8536 host.go:66] Checking if "multinode-813300-m02" exists ...
	I0610 12:34:27.211184    8536 mustload.go:65] Loading cluster: multinode-813300
	I0610 12:34:27.211850    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:34:27.212425    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:34:29.561140    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:29.561203    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:29.561203    8536 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:34:29.561795    8536 api_server.go:166] Checking apiserver status ...
	I0610 12:34:29.572789    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:34:29.572789    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:34:31.904733    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:31.904733    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:31.904891    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:34.690377    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:34:34.690377    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:34.691371    8536 sshutil.go:53] new ssh client: &{IP:172.17.150.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:34:34.816356    8536 command_runner.go:130] > 1892
	I0610 12:34:34.816356    8536 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.2435246s)
	I0610 12:34:34.829971    8536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1892/cgroup
	W0610 12:34:34.850979    8536 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1892/cgroup: Process exited with status 1
	stdout:
	
	stderr:
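The freezer warning above is benign: /proc/<pid>/cgroup only lists per-controller entries on cgroup v1, so on a cgroup v2 guest the file holds a single "0::/..." line, the egrep finds nothing and exits 1, and the code falls back to probing the apiserver's /healthz (which succeeds just below). A sketch of that controller lookup; freezerPath is an illustrative name:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// freezerPath scans /proc/<pid>/cgroup for a "freezer" controller entry.
	// On cgroup v2 hosts there are no per-controller lines, so the lookup
	// fails exactly as in the warning above and callers must fall back.
	func freezerPath(pid int) (string, bool) {
		f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", false
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// Line format: hierarchy-ID:controller-list:cgroup-path
			parts := strings.SplitN(sc.Text(), ":", 3)
			if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
				return parts[2], true
			}
		}
		return "", false
	}

	func main() {
		fmt.Println(freezerPath(os.Getpid()))
	}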
	I0610 12:34:34.863836    8536 ssh_runner.go:195] Run: ls
	I0610 12:34:34.871466    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:34:34.878746    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 200:
	ok
	I0610 12:34:34.891725    8536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-813300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0610 12:34:35.074735    8536 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-r4nfq, kube-system/kube-proxy-rx2b2
	I0610 12:34:38.105435    8536 command_runner.go:130] > node/multinode-813300-m02 cordoned
	I0610 12:34:38.105435    8536 command_runner.go:130] > pod "busybox-fc5497c4f-czxmt" has DeletionTimestamp older than 1 seconds, skipping
	I0610 12:34:38.105435    8536 command_runner.go:130] > node/multinode-813300-m02 drained
	I0610 12:34:38.105626    8536 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-813300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.2136847s)
	I0610 12:34:38.105626    8536 node.go:128] successfully drained node "multinode-813300-m02"
	I0610 12:34:38.105626    8536 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0610 12:34:38.105744    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:34:40.442810    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:40.443106    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:40.443106    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:43.281170    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:43.281170    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:43.282609    8536 sshutil.go:53] new ssh client: &{IP:172.17.144.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-813300" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-813300
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-813300: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-813300" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-813300	172.17.159.171
multinode-813300-m02	172.17.151.128
multinode-813300-m03	172.17.144.46

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-813300 -n multinode-813300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-813300 -n multinode-813300: (13.0122483s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-813300 logs -n 25: (12.3043972s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-314000 ssh -- ls                    | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:03 UTC | 10 Jun 24 12:04 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-314000                           | mount-start-2-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:04 UTC |
	| delete  | -p mount-start-1-314000                           | mount-start-1-314000 | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:04 UTC |
	| start   | -p multinode-813300                               | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:04 UTC | 10 Jun 24 12:11 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- apply -f                   | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- rollout                    | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- get pods -o                | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-czxmt                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC |                     |
	|         | busybox-fc5497c4f-czxmt -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC | 10 Jun 24 12:12 UTC |
	|         | busybox-fc5497c4f-z28tq                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-813300 -- exec                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:12 UTC |                     |
	|         | busybox-fc5497c4f-z28tq -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.144.1                         |                      |                   |         |                     |                     |
	| node    | add -p multinode-813300 -v 3                      | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:13 UTC |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	| node    | multinode-813300 node stop m03                    | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:19 UTC | 10 Jun 24 12:20 UTC |
	| node    | multinode-813300 node start                       | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:21 UTC | 10 Jun 24 12:26 UTC |
	|         | m03 -v=7 --alsologtostderr                        |                      |                   |         |                     |                     |
	| node    | list -p multinode-813300                          | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:26 UTC |                     |
	| stop    | -p multinode-813300                               | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:26 UTC | 10 Jun 24 12:28 UTC |
	| start   | -p multinode-813300                               | multinode-813300     | minikube6\jenkins | v1.33.1 | 10 Jun 24 12:28 UTC |                     |
	|         | --wait=true -v=8                                  |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 12:28:38
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 12:28:38.654839    8536 out.go:291] Setting OutFile to fd 604 ...
	I0610 12:28:38.654983    8536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:28:38.654983    8536 out.go:304] Setting ErrFile to fd 880...
	I0610 12:28:38.654983    8536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:28:38.677325    8536 out.go:298] Setting JSON to false
	I0610 12:28:38.680796    8536 start.go:129] hostinfo: {"hostname":"minikube6","uptime":22407,"bootTime":1718000111,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 12:28:38.680796    8536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 12:28:38.877736    8536 out.go:177] * [multinode-813300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 12:28:38.892532    8536 notify.go:220] Checking for updates...
	I0610 12:28:38.906740    8536 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:28:38.929681    8536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 12:28:38.940798    8536 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 12:28:39.019798    8536 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 12:28:39.117032    8536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 12:28:39.164958    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:28:39.165743    8536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 12:28:45.223549    8536 out.go:177] * Using the hyperv driver based on existing profile
	I0610 12:28:45.237414    8536 start.go:297] selected driver: hyperv
	I0610 12:28:45.237414    8536 start.go:901] validating driver "hyperv" against &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.144.46 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:28:45.238193    8536 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 12:28:45.295122    8536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:28:45.295122    8536 cni.go:84] Creating CNI manager for ""
	I0610 12:28:45.295122    8536 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 12:28:45.295122    8536 start.go:340] cluster config:
	{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.159.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.144.46 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:28:45.296067    8536 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:28:45.377354    8536 out.go:177] * Starting "multinode-813300" primary control-plane node in "multinode-813300" cluster
	I0610 12:28:45.415578    8536 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:28:45.416310    8536 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 12:28:45.416389    8536 cache.go:56] Caching tarball of preloaded images
	I0610 12:28:45.416765    8536 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:28:45.417002    8536 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 12:28:45.417351    8536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:28:45.420305    8536 start.go:360] acquireMachinesLock for multinode-813300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:28:45.420305    8536 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300"
	I0610 12:28:45.420305    8536 start.go:96] Skipping create...Using existing machine configuration
	I0610 12:28:45.420831    8536 fix.go:54] fixHost starting: 
	I0610 12:28:45.421427    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:28:48.413842    8536 main.go:141] libmachine: [stdout =====>] : Off
	
	I0610 12:28:48.413842    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:28:48.413933    8536 fix.go:112] recreateIfNeeded on multinode-813300: state=Stopped err=<nil>
	W0610 12:28:48.413933    8536 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 12:28:48.416868    8536 out.go:177] * Restarting existing hyperv VM for "multinode-813300" ...
	I0610 12:28:48.420782    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300
	I0610 12:28:51.713723    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:28:51.714356    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:28:51.714356    8536 main.go:141] libmachine: Waiting for host to start...
	I0610 12:28:51.714356    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:28:54.118878    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:28:54.119411    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:28:54.119503    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:28:56.814045    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:28:56.814045    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:28:57.822171    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:00.211852    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:00.211852    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:00.212476    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:02.926524    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:29:02.926524    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:03.937598    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:06.275325    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:06.275325    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:06.275325    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:09.010990    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:29:09.010990    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:10.016228    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:12.410508    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:12.410508    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:12.411443    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:15.181346    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:29:15.181346    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:16.183093    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:18.525084    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:18.525150    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:18.525150    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:21.208775    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:21.208775    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:21.211590    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:23.514717    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:23.514717    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:23.515049    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:26.239801    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:26.240812    8536 main.go:141] libmachine: [stderr =====>] : 
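Note: the repeated [executing ==>] blocks above are the Hyper-V driver polling the VM roughly once per second: it queries ( Hyper-V\Get-VM <name> ).state, and once that reports Running it queries the first adapter's first IP until a non-empty address comes back (about 30 seconds in this run). An illustrative sketch of the same loop, with the PowerShell expressions copied from the log; the helper names and the bare "powershell.exe" lookup are simplifications:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell expression and returns its trimmed stdout.
    func ps(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls the VM until Hyper-V reports a usable address.
    func waitForIP(vm string) (string, error) {
        for {
            state, err := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            if err != nil {
                return "", err
            }
            if state == "Running" {
                ip, err := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
                if err != nil {
                    return "", err
                }
                if ip != "" {
                    return ip, nil // e.g. 172.17.150.144 in this run
                }
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        fmt.Println(waitForIP("multinode-813300"))
    }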
	I0610 12:29:26.241182    8536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:29:26.244303    8536 machine.go:94] provisionDockerMachine start ...
	I0610 12:29:26.244413    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:28.530608    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:28.530812    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:28.530812    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:31.282690    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:31.284009    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:31.289874    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:29:31.290002    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:29:31.290002    8536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:29:31.435447    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:29:31.435447    8536 buildroot.go:166] provisioning hostname "multinode-813300"
	I0610 12:29:31.435447    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:33.722919    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:33.722970    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:33.722970    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:36.471690    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:36.472334    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:36.479090    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:29:36.479791    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:29:36.479791    8536 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300 && echo "multinode-813300" | sudo tee /etc/hostname
	I0610 12:29:36.652382    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300
	
	I0610 12:29:36.652514    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:38.983413    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:38.983600    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:38.983600    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:41.749950    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:41.750776    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:41.756940    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:29:41.757629    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:29:41.757629    8536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:29:41.917797    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
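Note: the heredoc above is an idempotent hostname fix-up. grep -xq first checks whether some line in /etc/hosts already ends with the new hostname; if not, an existing 127.0.1.1 entry is rewritten in place with sed, and a fresh "127.0.1.1 multinode-813300" line is appended only when no such entry exists, so repeated provisioning never duplicates entries.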
	I0610 12:29:41.917797    8536 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:29:41.917797    8536 buildroot.go:174] setting up certificates
	I0610 12:29:41.917797    8536 provision.go:84] configureAuth start
	I0610 12:29:41.917797    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:44.213749    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:44.214100    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:44.214282    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:46.967042    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:46.967471    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:46.967471    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:49.312432    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:49.312544    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:49.312651    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:52.090532    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:52.090726    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:52.090726    8536 provision.go:143] copyHostCerts
	I0610 12:29:52.090950    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 12:29:52.091273    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:29:52.091273    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:29:52.091850    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:29:52.092736    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 12:29:52.093283    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:29:52.093283    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:29:52.093705    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:29:52.094721    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 12:29:52.094998    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:29:52.095097    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:29:52.095432    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:29:52.096118    8536 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300 san=[127.0.0.1 172.17.150.144 localhost minikube multinode-813300]
	I0610 12:29:52.185188    8536 provision.go:177] copyRemoteCerts
	I0610 12:29:52.203551    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:29:52.203551    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:54.528062    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:54.528062    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:54.528376    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:29:57.219889    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:29:57.219889    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:57.221301    8536 sshutil.go:53] new ssh client: &{IP:172.17.150.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:29:57.334411    8536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1308185s)
	I0610 12:29:57.334411    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 12:29:57.335128    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:29:57.388855    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 12:29:57.389417    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 12:29:57.440865    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 12:29:57.440865    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 12:29:57.485942    8536 provision.go:87] duration metric: took 15.5680194s to configureAuth
	I0610 12:29:57.485942    8536 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:29:57.486840    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:29:57.486978    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:29:59.788145    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:29:59.788186    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:29:59.788282    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:02.552883    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:02.552883    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:02.558354    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:30:02.558354    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:30:02.558940    8536 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:30:02.696563    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:30:02.696563    8536 buildroot.go:70] root file system type: tmpfs
	I0610 12:30:02.696831    8536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:30:02.696831    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:04.985348    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:04.986116    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:04.986116    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:07.764990    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:07.764990    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:07.771821    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:30:07.772272    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:30:07.772416    8536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:30:07.947905    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
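Note: the stray "%!s(MISSING)" in the command at 12:30:07 (and in "date +%!s(MISSING).%!N(MISSING)" further down) is not part of the shell command; it is Go's fmt package reporting a missing operand, because the logged command string legitimately contains printf's %s verb (and date's %s.%N) and was itself rendered through a formatting call. The remote shell received the plain % verbs, as the echoed unit file above confirms. The artifact is easy to reproduce:

    package main

    import "fmt"

    func main() {
        // A %s verb in the format string with no matching operand:
        fmt.Printf("sudo mkdir -p /lib/systemd/system && printf %s ...\n")
        // Prints: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) ...
    }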
	I0610 12:30:07.947905    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:10.229229    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:10.229229    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:10.229735    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:12.986954    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:12.986954    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:12.993556    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:30:12.994271    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:30:12.994271    8536 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:30:15.629392    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:30:15.629510    8536 machine.go:97] duration metric: took 49.3846172s to provisionDockerMachine
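Note: the one-liner at 12:30:12 is a diff-or-replace install: the freshly rendered docker.service.new is moved into place, followed by daemon-reload, enable, and restart, only when it differs from the installed unit, or, as here, when diff cannot stat a unit that does not exist yet. An unchanged configuration leaves the running daemon untouched.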
	I0610 12:30:15.629551    8536 start.go:293] postStartSetup for "multinode-813300" (driver="hyperv")
	I0610 12:30:15.629551    8536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:30:15.643606    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:30:15.643606    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:17.924737    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:17.924737    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:17.925039    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:20.737689    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:20.737689    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:20.738451    8536 sshutil.go:53] new ssh client: &{IP:172.17.150.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:30:20.861148    8536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2174997s)
	I0610 12:30:20.878070    8536 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:30:20.886140    8536 command_runner.go:130] > NAME=Buildroot
	I0610 12:30:20.886261    8536 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 12:30:20.886261    8536 command_runner.go:130] > ID=buildroot
	I0610 12:30:20.886261    8536 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 12:30:20.886261    8536 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 12:30:20.886261    8536 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:30:20.886261    8536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:30:20.886912    8536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:30:20.887780    8536 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:30:20.887780    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 12:30:20.901192    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:30:20.919463    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:30:20.970028    8536 start.go:296] duration metric: took 5.3404341s for postStartSetup
	I0610 12:30:20.970028    8536 fix.go:56] duration metric: took 1m35.5489487s for fixHost
	I0610 12:30:20.970028    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:23.358856    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:23.358921    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:23.358921    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:26.123102    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:26.123102    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:26.130849    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:30:26.131005    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:30:26.131005    8536 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 12:30:26.270831    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718022626.258816297
	
	I0610 12:30:26.270974    8536 fix.go:216] guest clock: 1718022626.258816297
	I0610 12:30:26.270974    8536 fix.go:229] Guest: 2024-06-10 12:30:26.258816297 +0000 UTC Remote: 2024-06-10 12:30:20.9700283 +0000 UTC m=+102.488567101 (delta=5.288787997s)
	I0610 12:30:26.271118    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:28.609922    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:28.610596    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:28.610596    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:31.337885    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:31.337885    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:31.346928    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:30:31.346928    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.150.144 22 <nil> <nil>}
	I0610 12:30:31.346928    8536 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718022626
	I0610 12:30:31.500608    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:30:26 UTC 2024
	
	I0610 12:30:31.500691    8536 fix.go:236] clock set: Mon Jun 10 12:30:26 UTC 2024
	 (err=<nil>)
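Note: fix.go reads the guest clock over SSH with "date +%s.%N", compares it against the host-side reference, and resets it with "sudo date -s @<epoch>" (whole seconds only) when the two disagree; here the guest was about 5.29s ahead. A sketch of the delta computation reported above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1718022626, 258816297) // guest clock from "date +%s.%N"
        // Host-side reference timestamp from the fix.go log line above.
        remote := time.Date(2024, 6, 10, 12, 30, 20, 970028300, time.UTC)
        fmt.Println(guest.Sub(remote)) // 5.288787997s, matching the logged delta
    }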
	I0610 12:30:31.500691    8536 start.go:83] releasing machines lock for "multinode-813300", held for 1m46.0795262s
	I0610 12:30:31.501016    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:33.776460    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:33.777056    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:33.777056    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:36.554030    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:36.554635    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:36.559082    8536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:30:36.559240    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:36.570714    8536 ssh_runner.go:195] Run: cat /version.json
	I0610 12:30:36.570714    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:30:38.925758    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:38.925758    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:38.926098    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:38.926198    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:30:38.926198    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:38.926198    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:30:41.773204    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:41.773400    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:41.773400    8536 sshutil.go:53] new ssh client: &{IP:172.17.150.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:30:41.799540    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:30:41.799651    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:30:41.800007    8536 sshutil.go:53] new ssh client: &{IP:172.17.150.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:30:41.872338    8536 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 12:30:41.872539    8536 ssh_runner.go:235] Completed: cat /version.json: (5.3017825s)
	I0610 12:30:41.885396    8536 ssh_runner.go:195] Run: systemctl --version
	I0610 12:30:42.101945    8536 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 12:30:42.103122    8536 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5428188s)
	I0610 12:30:42.103159    8536 command_runner.go:130] > systemd 252 (252)
	I0610 12:30:42.103303    8536 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 12:30:42.114776    8536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 12:30:42.123977    8536 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 12:30:42.124798    8536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:30:42.136387    8536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:30:42.165177    8536 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 12:30:42.165177    8536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 12:30:42.165320    8536 start.go:494] detecting cgroup driver to use...
	I0610 12:30:42.165521    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:30:42.212062    8536 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 12:30:42.226437    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:30:42.258211    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:30:42.278902    8536 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:30:42.289535    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:30:42.323665    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:30:42.355027    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:30:42.386171    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:30:42.423508    8536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:30:42.464119    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:30:42.497561    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:30:42.529363    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 12:30:42.559375    8536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:30:42.578798    8536 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 12:30:42.589359    8536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:30:42.619653    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:30:42.830921    8536 ssh_runner.go:195] Run: sudo systemctl restart containerd
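The sed edits above rewrite /etc/containerd/config.toml so that SystemdCgroup matches the detected "cgroupfs" driver, after which containerd is restarted. As a rough illustration of that one edit, here is a minimal Go sketch of the same regex replacement; the helper name and the local-file approach are assumptions (the real step runs sed over SSH inside the VM):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup flips the SystemdCgroup key in a containerd config.toml,
// mirroring the `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
// call in the log above. Illustrative only, not minikube's implementation.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// The log configures containerd for the "cgroupfs" driver, i.e. SystemdCgroup = false.
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}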
	I0610 12:30:42.862669    8536 start.go:494] detecting cgroup driver to use...
	I0610 12:30:42.874483    8536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:30:42.899477    8536 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 12:30:42.899477    8536 command_runner.go:130] > [Unit]
	I0610 12:30:42.899846    8536 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 12:30:42.899846    8536 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 12:30:42.899846    8536 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 12:30:42.899846    8536 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 12:30:42.899846    8536 command_runner.go:130] > StartLimitBurst=3
	I0610 12:30:42.899846    8536 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 12:30:42.899846    8536 command_runner.go:130] > [Service]
	I0610 12:30:42.899846    8536 command_runner.go:130] > Type=notify
	I0610 12:30:42.899846    8536 command_runner.go:130] > Restart=on-failure
	I0610 12:30:42.899846    8536 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 12:30:42.899983    8536 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 12:30:42.899983    8536 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 12:30:42.899983    8536 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 12:30:42.899983    8536 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 12:30:42.900028    8536 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 12:30:42.900028    8536 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 12:30:42.900068    8536 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 12:30:42.900091    8536 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 12:30:42.900091    8536 command_runner.go:130] > ExecStart=
	I0610 12:30:42.900091    8536 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 12:30:42.900091    8536 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 12:30:42.900164    8536 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 12:30:42.900190    8536 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 12:30:42.900190    8536 command_runner.go:130] > LimitNOFILE=infinity
	I0610 12:30:42.900220    8536 command_runner.go:130] > LimitNPROC=infinity
	I0610 12:30:42.900220    8536 command_runner.go:130] > LimitCORE=infinity
	I0610 12:30:42.900220    8536 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 12:30:42.900220    8536 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 12:30:42.900220    8536 command_runner.go:130] > TasksMax=infinity
	I0610 12:30:42.900220    8536 command_runner.go:130] > TimeoutStartSec=0
	I0610 12:30:42.900220    8536 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 12:30:42.900220    8536 command_runner.go:130] > Delegate=yes
	I0610 12:30:42.900220    8536 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 12:30:42.900220    8536 command_runner.go:130] > KillMode=process
	I0610 12:30:42.900220    8536 command_runner.go:130] > [Install]
	I0610 12:30:42.900220    8536 command_runner.go:130] > WantedBy=multi-user.target
	I0610 12:30:42.914316    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:30:42.958298    8536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:30:43.008354    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:30:43.046473    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:30:43.085725    8536 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:30:43.163345    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:30:43.192848    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:30:43.236715    8536 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 12:30:43.248701    8536 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:30:43.254691    8536 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 12:30:43.272660    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:30:43.293585    8536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:30:43.346468    8536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:30:43.587661    8536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:30:43.790758    8536 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:30:43.791070    8536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 12:30:43.841161    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:30:44.070472    8536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:30:46.791330    8536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7205702s)
	I0610 12:30:46.803685    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 12:30:46.840565    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:30:46.877595    8536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 12:30:47.102484    8536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 12:30:47.324886    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:30:47.556726    8536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 12:30:47.597477    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:30:47.633945    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:30:47.854989    8536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
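The unmask, enable, daemon-reload, restart ordering above matters: the cri-docker socket unit must be unmasked and enabled before systemd reloads and restarts it, and the service itself is only restarted after a second reload. A hedged Go sketch of driving that sequence with os/exec (helper name is illustrative; the log performs these steps over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// systemctl runs one systemctl subcommand and surfaces combined output on
// failure. Hypothetical helper, shown only to make the ordering explicit.
func systemctl(args ...string) error {
	out, err := exec.Command("sudo", append([]string{"systemctl"}, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	// Same order as the log: unmask and enable the socket, reload units,
	// restart the socket, then reload again before restarting the service.
	steps := [][]string{
		{"unmask", "cri-docker.socket"},
		{"enable", "cri-docker.socket"},
		{"daemon-reload"},
		{"restart", "cri-docker.socket"},
		{"daemon-reload"},
		{"restart", "cri-docker.service"},
	}
	for _, s := range steps {
		if err := systemctl(s...); err != nil {
			panic(err)
		}
	}
}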
	I0610 12:30:47.967140    8536 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 12:30:47.982432    8536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 12:30:47.991114    8536 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 12:30:47.991114    8536 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 12:30:47.991114    8536 command_runner.go:130] > Device: 0,22	Inode: 840         Links: 1
	I0610 12:30:47.991114    8536 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 12:30:47.991114    8536 command_runner.go:130] > Access: 2024-06-10 12:30:47.879784912 +0000
	I0610 12:30:47.991114    8536 command_runner.go:130] > Modify: 2024-06-10 12:30:47.879784912 +0000
	I0610 12:30:47.991114    8536 command_runner.go:130] > Change: 2024-06-10 12:30:47.884785012 +0000
	I0610 12:30:47.991114    8536 command_runner.go:130] >  Birth: -
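"Will wait 60s for socket path" is a poll-until-stat-succeeds loop: the stat output above shows the check passing on the first try. A minimal sketch of that contract, assuming a plain os.Stat poll (the 500ms interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the deadline
// passes - the same contract as the "Will wait 60s for socket path" step.
// Illustrative sketch, not minikube's code.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}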
	I0610 12:30:47.991114    8536 start.go:562] Will wait 60s for crictl version
	I0610 12:30:48.003665    8536 ssh_runner.go:195] Run: which crictl
	I0610 12:30:48.009966    8536 command_runner.go:130] > /usr/bin/crictl
	I0610 12:30:48.021821    8536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:30:48.091336    8536 command_runner.go:130] > Version:  0.1.0
	I0610 12:30:48.091336    8536 command_runner.go:130] > RuntimeName:  docker
	I0610 12:30:48.091336    8536 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 12:30:48.091336    8536 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 12:30:48.091336    8536 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 12:30:48.101403    8536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:30:48.140012    8536 command_runner.go:130] > 26.1.4
	I0610 12:30:48.149987    8536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:30:48.185101    8536 command_runner.go:130] > 26.1.4
	I0610 12:30:48.193254    8536 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 12:30:48.193254    8536 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 12:30:48.196260    8536 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 12:30:48.196260    8536 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 12:30:48.196260    8536 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 12:30:48.196260    8536 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 12:30:48.201426    8536 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 12:30:48.201426    8536 ip.go:210] interface addr: 172.17.144.1/20
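The ip.go lines above show host-IP discovery by interface-name prefix: "Ethernet 2" and the loopback are rejected, the "vEthernet (Default Switch)" adapter matches, and its IPv4 address (172.17.144.1/20) wins over the fe80:: link-local one. A sketch of that logic with the standard library (function name is illustrative):

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterfacePrefix finds the first interface whose name starts with
// prefix and returns its first IPv4 address - the same prefix-match-then-
// pick-IPv4 behavior the ip.go lines above show. Illustrative sketch.
func ipForInterfacePrefix(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue // e.g. "Ethernet 2" does not match, as in the log
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil // the log picks 172.17.144.1/20 here
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
}

func main() {
	ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}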
	I0610 12:30:48.213676    8536 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 12:30:48.220961    8536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
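The /etc/hosts one-liner above filters out any existing host.minikube.internal entry and appends a fresh one for the current host IP. An equivalent filter-and-append in Go, as a sketch (the real step stages the result through /tmp/h.$$ and sudo cp):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "\t<host>" and appends a
// fresh "<ip>\t<host>" entry, mirroring the grep -v / echo / cp one-liner in
// the log. Sketch only.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "172.17.144.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}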
	I0610 12:30:48.244733    8536 kubeadm.go:877] updating cluster {Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.150.144 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.144.46 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 12:30:48.245500    8536 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:30:48.254297    8536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 12:30:48.284201    8536 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 12:30:48.284874    8536 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 12:30:48.284874    8536 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 12:30:48.284874    8536 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 12:30:48.284974    8536 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0610 12:30:48.284974    8536 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 12:30:48.284974    8536 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 12:30:48.284974    8536 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 12:30:48.284974    8536 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:30:48.284974    8536 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0610 12:30:48.285131    8536 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0610 12:30:48.285131    8536 docker.go:615] Images already preloaded, skipping extraction
	I0610 12:30:48.295523    8536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 12:30:48.327822    8536 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 12:30:48.327822    8536 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 12:30:48.327822    8536 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:30:48.327822    8536 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0610 12:30:48.327822    8536 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0610 12:30:48.327822    8536 cache_images.go:84] Images are preloaded, skipping loading
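"Images are preloaded, skipping loading" falls out of comparing the `docker images --format {{.Repository}}:{{.Tag}}` listing above against the expected preload set. A sketch of that set-membership check (the helper name and the truncated wanted list are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every wanted image already shows up in
// `docker images --format {{.Repository}}:{{.Tag}}` - the check that lets the
// log print "Images are preloaded, skipping loading". Illustrative sketch.
func imagesPreloaded(wanted []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, ref := range strings.Fields(string(out)) {
		have[ref] = true
	}
	for _, img := range wanted {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/pause:3.9",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}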
	I0610 12:30:48.328349    8536 kubeadm.go:928] updating node { 172.17.150.144 8443 v1.30.1 docker true true} ...
	I0610 12:30:48.328393    8536 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.150.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 12:30:48.336375    8536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 12:30:48.379653    8536 command_runner.go:130] > cgroupfs
	I0610 12:30:48.379653    8536 cni.go:84] Creating CNI manager for ""
	I0610 12:30:48.379653    8536 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 12:30:48.379653    8536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 12:30:48.379653    8536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.150.144 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-813300 NodeName:multinode-813300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.150.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.150.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 12:30:48.379653    8536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.150.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-813300"
	  kubeletExtraArgs:
	    node-ip: 172.17.150.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.150.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
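The kubeadm config above is generated per node: only the advertise address, node name, and CRI socket vary between restarts in this log. A trimmed-down text/template sketch of that substitution; the template shape is an assumption, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A stand-in for the kubeadm config generation above: only the fields that
// actually vary in this log (advertise address, node name, CRI socket) are
// templated. Illustrative only.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = tmpl.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "172.17.150.144",
		"Port":             8443,
		"CRISocket":        "unix:///var/run/cri-dockerd.sock",
		"NodeName":         "multinode-813300",
	})
}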
	I0610 12:30:48.393675    8536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 12:30:48.416114    8536 command_runner.go:130] > kubeadm
	I0610 12:30:48.416114    8536 command_runner.go:130] > kubectl
	I0610 12:30:48.416114    8536 command_runner.go:130] > kubelet
	I0610 12:30:48.416184    8536 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 12:30:48.429880    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 12:30:48.452913    8536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0610 12:30:48.483630    8536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 12:30:48.517007    8536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0610 12:30:48.570463    8536 ssh_runner.go:195] Run: grep 172.17.150.144	control-plane.minikube.internal$ /etc/hosts
	I0610 12:30:48.577138    8536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.150.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:30:48.611992    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:30:48.834153    8536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:30:48.868245    8536 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300 for IP: 172.17.150.144
	I0610 12:30:48.868329    8536 certs.go:194] generating shared ca certs ...
	I0610 12:30:48.868374    8536 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:30:48.869175    8536 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 12:30:48.869443    8536 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 12:30:48.869970    8536 certs.go:256] generating profile certs ...
	I0610 12:30:48.870826    8536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\client.key
	I0610 12:30:48.870826    8536 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.18129446
	I0610 12:30:48.870826    8536 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.18129446 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.150.144]
	I0610 12:30:48.967326    8536 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.18129446 ...
	I0610 12:30:48.967326    8536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.18129446: {Name:mk10a39c5392a50c9be23655c99ab50aa79910fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:30:48.969338    8536 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.18129446 ...
	I0610 12:30:48.969338    8536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.18129446: {Name:mk84e846335431ca2dddd39c9c8847a448320834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:30:48.969619    8536 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt.18129446 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt
	I0610 12:30:48.983700    8536 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key.18129446 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key
	I0610 12:30:48.984855    8536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key
	I0610 12:30:48.985403    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 12:30:48.985496    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 12:30:48.985496    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 12:30:48.985496    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 12:30:48.986120    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 12:30:48.986120    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 12:30:48.986120    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 12:30:48.986654    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 12:30:48.987243    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 12:30:48.987578    8536 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 12:30:48.987695    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 12:30:48.987985    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 12:30:48.988116    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 12:30:48.988116    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 12:30:48.989041    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 12:30:48.989319    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 12:30:48.989343    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 12:30:48.989343    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:30:48.991127    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 12:30:49.045283    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 12:30:49.096175    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 12:30:49.146219    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 12:30:49.199394    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 12:30:49.252212    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 12:30:49.304181    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 12:30:49.369323    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 12:30:49.425787    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 12:30:49.474507    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 12:30:49.527167    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 12:30:49.575904    8536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 12:30:49.627713    8536 ssh_runner.go:195] Run: openssl version
	I0610 12:30:49.638196    8536 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 12:30:49.651705    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 12:30:49.683246    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:30:49.690437    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:30:49.690437    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:30:49.703965    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:30:49.713866    8536 command_runner.go:130] > b5213941
	I0610 12:30:49.725992    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 12:30:49.758905    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 12:30:49.790270    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 12:30:49.800463    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:30:49.800608    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:30:49.815877    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 12:30:49.826959    8536 command_runner.go:130] > 51391683
	I0610 12:30:49.839053    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
	I0610 12:30:49.870738    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 12:30:49.910794    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 12:30:49.923102    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:30:49.923102    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:30:49.935320    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 12:30:49.945061    8536 command_runner.go:130] > 3ec20f2e
	I0610 12:30:49.957426    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
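Each `openssl x509 -hash` / `ln -fs` pair above builds the <subject-hash>.0 symlinks that OpenSSL uses to look up CAs in /etc/ssl/certs. A sketch of the same pattern (helper name is illustrative; it shells out to openssl exactly as the log does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash reproduces the openssl-x509-hash + ln -fs pattern above:
// compute the subject hash of a CA cert, then point /etc/ssl/certs/<hash>.0
// at it so OpenSSL's lookup-by-hash can find it. Illustrative sketch.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}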
	I0610 12:30:49.993286    8536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:30:50.004954    8536 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:30:50.005028    8536 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0610 12:30:50.005028    8536 command_runner.go:130] > Device: 8,1	Inode: 5243218     Links: 1
	I0610 12:30:50.005028    8536 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 12:30:50.005028    8536 command_runner.go:130] > Access: 2024-06-10 12:07:48.567870685 +0000
	I0610 12:30:50.005211    8536 command_runner.go:130] > Modify: 2024-06-10 12:07:48.567870685 +0000
	I0610 12:30:50.005280    8536 command_runner.go:130] > Change: 2024-06-10 12:07:48.567870685 +0000
	I0610 12:30:50.005280    8536 command_runner.go:130] >  Birth: 2024-06-10 12:07:48.567870685 +0000
	I0610 12:30:50.019875    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 12:30:50.030991    8536 command_runner.go:130] > Certificate will not expire
	I0610 12:30:50.043975    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 12:30:50.060811    8536 command_runner.go:130] > Certificate will not expire
	I0610 12:30:50.071748    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 12:30:50.084717    8536 command_runner.go:130] > Certificate will not expire
	I0610 12:30:50.095710    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 12:30:50.105170    8536 command_runner.go:130] > Certificate will not expire
	I0610 12:30:50.116979    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 12:30:50.126077    8536 command_runner.go:130] > Certificate will not expire
	I0610 12:30:50.138413    8536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 12:30:50.147367    8536 command_runner.go:130] > Certificate will not expire
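The repeated `openssl x509 -noout -checkend 86400` calls above ask one question per certificate: does it expire within the next 24 hours? The same check in pure Go with crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the crypto/x509 equivalent of the repeated
// `openssl x509 -noout -checkend 86400` calls above: true if the cert
// expires within the given window. Illustrative sketch.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire") // matches the log output
	}
}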
	I0610 12:30:50.147929    8536 kubeadm.go:391] StartCluster: {Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.150.144 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.151.128 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.144.46 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:30:50.156587    8536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 12:30:50.190885    8536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 12:30:50.208685    8536 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0610 12:30:50.208912    8536 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0610 12:30:50.208912    8536 command_runner.go:130] > /var/lib/minikube/etcd:
	I0610 12:30:50.208912    8536 command_runner.go:130] > member
	W0610 12:30:50.208912    8536 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 12:30:50.208912    8536 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 12:30:50.208912    8536 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 12:30:50.221129    8536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 12:30:50.246391    8536 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 12:30:50.247783    8536 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-813300" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:30:50.248308    8536 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-813300" cluster setting kubeconfig missing "multinode-813300" context setting]
	I0610 12:30:50.249269    8536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:30:50.264266    8536 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:30:50.265683    8536 kapi.go:59] client config for multinode-813300: &rest.Config{Host:"https://172.17.150.144:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-813300/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfe1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 12:30:50.267447    8536 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 12:30:50.279479    8536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 12:30:50.299920    8536 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0610 12:30:50.299983    8536 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0610 12:30:50.299983    8536 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0610 12:30:50.300044    8536 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0610 12:30:50.300079    8536 command_runner.go:130] >  kind: InitConfiguration
	I0610 12:30:50.300079    8536 command_runner.go:130] >  localAPIEndpoint:
	I0610 12:30:50.300079    8536 command_runner.go:130] > -  advertiseAddress: 172.17.159.171
	I0610 12:30:50.300079    8536 command_runner.go:130] > +  advertiseAddress: 172.17.150.144
	I0610 12:30:50.300135    8536 command_runner.go:130] >    bindPort: 8443
	I0610 12:30:50.300135    8536 command_runner.go:130] >  bootstrapTokens:
	I0610 12:30:50.300160    8536 command_runner.go:130] >    - groups:
	I0610 12:30:50.300160    8536 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0610 12:30:50.300160    8536 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0610 12:30:50.300160    8536 command_runner.go:130] >    name: "multinode-813300"
	I0610 12:30:50.300238    8536 command_runner.go:130] >    kubeletExtraArgs:
	I0610 12:30:50.300238    8536 command_runner.go:130] > -    node-ip: 172.17.159.171
	I0610 12:30:50.300238    8536 command_runner.go:130] > +    node-ip: 172.17.150.144
	I0610 12:30:50.300238    8536 command_runner.go:130] >    taints: []
	I0610 12:30:50.300238    8536 command_runner.go:130] >  ---
	I0610 12:30:50.300339    8536 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0610 12:30:50.300339    8536 command_runner.go:130] >  kind: ClusterConfiguration
	I0610 12:30:50.300339    8536 command_runner.go:130] >  apiServer:
	I0610 12:30:50.300339    8536 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.17.159.171"]
	I0610 12:30:50.300339    8536 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.17.150.144"]
	I0610 12:30:50.300423    8536 command_runner.go:130] >    extraArgs:
	I0610 12:30:50.300450    8536 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0610 12:30:50.300450    8536 command_runner.go:130] >  controllerManager:
	I0610 12:30:50.300450    8536 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.159.171
	+  advertiseAddress: 172.17.150.144
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-813300"
	   kubeletExtraArgs:
	-    node-ip: 172.17.159.171
	+    node-ip: 172.17.150.144
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.159.171"]
	+  certSANs: ["127.0.0.1", "localhost", "172.17.150.144"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
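The reconfiguration decision above comes from diffing the deployed /var/tmp/minikube/kubeadm.yaml against the freshly rendered .new file; any drift (here, the node IP changed from 172.17.159.171 to 172.17.150.144) triggers a cluster reconfigure. A sketch of that check (helper name is illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// kubeadmConfigDrift mirrors the `sudo diff -u kubeadm.yaml kubeadm.yaml.new`
// check above: equal bytes mean no drift; otherwise return the unified diff
// that explains why the cluster will be reconfigured. Illustrative sketch.
func kubeadmConfigDrift(current, proposed string) (string, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return "", err
	}
	b, err := os.ReadFile(proposed)
	if err != nil {
		return "", err
	}
	if bytes.Equal(a, b) {
		return "", nil // nothing to do
	}
	// diff exits 1 when files differ, which is expected here, so keep the
	// captured output and only fail when nothing was produced.
	out, err := exec.Command("diff", "-u", current, proposed).Output()
	if len(out) == 0 && err != nil {
		return "", err
	}
	return string(out), nil
}

func main() {
	d, err := kubeadmConfigDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if d != "" {
		fmt.Println("config drift detected:\n" + d)
	}
}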
	I0610 12:30:50.300450    8536 kubeadm.go:1154] stopping kube-system containers ...
	I0610 12:30:50.308031    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 12:30:50.337037    8536 command_runner.go:130] > f2e39052db19
	I0610 12:30:50.337649    8536 command_runner.go:130] > d32ce22e31b0
	I0610 12:30:50.337649    8536 command_runner.go:130] > a0bc6043f7b9
	I0610 12:30:50.337649    8536 command_runner.go:130] > a1ae7aed0067
	I0610 12:30:50.337649    8536 command_runner.go:130] > c39d54960e7d
	I0610 12:30:50.337649    8536 command_runner.go:130] > afad8b05897e
	I0610 12:30:50.337649    8536 command_runner.go:130] > 689b8976cc02
	I0610 12:30:50.337649    8536 command_runner.go:130] > 62db1c721951
	I0610 12:30:50.337649    8536 command_runner.go:130] > bd1a6cd98743
	I0610 12:30:50.337649    8536 command_runner.go:130] > f1409bf44ff1
	I0610 12:30:50.337649    8536 command_runner.go:130] > 34b9299d74e3
	I0610 12:30:50.337649    8536 command_runner.go:130] > ba52603f8387
	I0610 12:30:50.337649    8536 command_runner.go:130] > f04d7b3d4fcc
	I0610 12:30:50.337649    8536 command_runner.go:130] > c7d28a97ba1c
	I0610 12:30:50.337649    8536 command_runner.go:130] > e3b6aa9a0e1d
	I0610 12:30:50.337649    8536 command_runner.go:130] > a10e49596de5
	I0610 12:30:50.339006    8536 docker.go:483] Stopping containers: [f2e39052db19 d32ce22e31b0 a0bc6043f7b9 a1ae7aed0067 c39d54960e7d afad8b05897e 689b8976cc02 62db1c721951 bd1a6cd98743 f1409bf44ff1 34b9299d74e3 ba52603f8387 f04d7b3d4fcc c7d28a97ba1c e3b6aa9a0e1d a10e49596de5]
	I0610 12:30:50.350377    8536 ssh_runner.go:195] Run: docker stop f2e39052db19 d32ce22e31b0 a0bc6043f7b9 a1ae7aed0067 c39d54960e7d afad8b05897e 689b8976cc02 62db1c721951 bd1a6cd98743 f1409bf44ff1 34b9299d74e3 ba52603f8387 f04d7b3d4fcc c7d28a97ba1c e3b6aa9a0e1d a10e49596de5
	I0610 12:30:50.383440    8536 command_runner.go:130] > f2e39052db19
	I0610 12:30:50.383440    8536 command_runner.go:130] > d32ce22e31b0
	I0610 12:30:50.383440    8536 command_runner.go:130] > a0bc6043f7b9
	I0610 12:30:50.383440    8536 command_runner.go:130] > a1ae7aed0067
	I0610 12:30:50.383440    8536 command_runner.go:130] > c39d54960e7d
	I0610 12:30:50.383440    8536 command_runner.go:130] > afad8b05897e
	I0610 12:30:50.383440    8536 command_runner.go:130] > 689b8976cc02
	I0610 12:30:50.383440    8536 command_runner.go:130] > 62db1c721951
	I0610 12:30:50.383440    8536 command_runner.go:130] > bd1a6cd98743
	I0610 12:30:50.383440    8536 command_runner.go:130] > f1409bf44ff1
	I0610 12:30:50.383440    8536 command_runner.go:130] > 34b9299d74e3
	I0610 12:30:50.383440    8536 command_runner.go:130] > ba52603f8387
	I0610 12:30:50.383440    8536 command_runner.go:130] > f04d7b3d4fcc
	I0610 12:30:50.383440    8536 command_runner.go:130] > c7d28a97ba1c
	I0610 12:30:50.383440    8536 command_runner.go:130] > e3b6aa9a0e1d
	I0610 12:30:50.383440    8536 command_runner.go:130] > a10e49596de5
	I0610 12:30:50.397012    8536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 12:30:50.443003    8536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 12:30:50.463699    8536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 12:30:50.463860    8536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 12:30:50.463860    8536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 12:30:50.463860    8536 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:30:50.464086    8536 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:30:50.464167    8536 kubeadm.go:156] found existing configuration files:
	
	I0610 12:30:50.477350    8536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 12:30:50.496838    8536 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:30:50.496838    8536 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:30:50.507829    8536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 12:30:50.548835    8536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 12:30:50.568660    8536 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:30:50.568660    8536 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:30:50.580851    8536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 12:30:50.611996    8536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 12:30:50.629155    8536 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:30:50.629155    8536 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:30:50.640648    8536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 12:30:50.673025    8536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 12:30:50.689528    8536 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:30:50.690156    8536 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:30:50.701757    8536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
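Each kubeconfig is then grepped for the expected control-plane endpoint and removed on a miss (exit status 2 here simply means the file is absent), so the kubeconfig phase below can regenerate all four. A sketch of that keep-or-remove decision, assuming plain local file access in place of minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
                os.Remove(conf) // error ignored: the file may not exist, as in this run
            }
        }
    }
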
	I0610 12:30:50.733605    8536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 12:30:50.750642    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 12:30:51.050154    8536 command_runner.go:130] > [certs] Using the existing "sa" key
	I0610 12:30:51.050154    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:30:53.559657    8536 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:30:53.560937    8536 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:30:53.560937    8536 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:30:53.560937    8536 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:30:53.560937    8536 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:30:53.560937    8536 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:30:53.560937    8536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.5104503s)
	I0610 12:30:53.560937    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:30:53.676924    8536 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:30:53.679941    8536 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:30:53.680103    8536 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 12:30:53.906932    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:30:54.006693    8536 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:30:54.006693    8536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:30:54.006693    8536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:30:54.006693    8536 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:30:54.006814    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:30:54.116485    8536 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
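Rather than a full "kubeadm init", the runs above drive individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A sketch of issuing the same sequence through bash, as minikube's ssh_runner does with its version-pinned PATH (error handling simplified; paths taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.30.1:$PATH\" kubeadm init phase " +
                p + " --config /var/tmp/minikube/kubeadm.yaml"
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("phase %q:\n%s", p, out)
            if err != nil {
                fmt.Println("phase failed:", err)
                return
            }
        }
    }

The certs phase is idempotent, which is why every certificate above is reported as "Using existing": the restart reuses the CA and keys already on disk.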
	I0610 12:30:54.116615    8536 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:30:54.128579    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:30:54.639320    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:30:55.147507    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:30:55.645247    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:30:56.145320    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:30:56.174980    8536 command_runner.go:130] > 1892
	I0610 12:30:56.175127    8536 api_server.go:72] duration metric: took 2.0584961s to wait for apiserver process to appear ...
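The repeated pgrep runs above poll for the kube-apiserver process at roughly half-second intervals until a PID (1892 here) appears. A sketch of that wait loop; the two-minute timeout is an assumption, not taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        start := time.Now()
        deadline := start.Add(2 * time.Minute) // assumed bound
        for time.Now().Before(deadline) {
            // pgrep exits non-zero while there is no match, so err doubles as "not yet".
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("apiserver pid %s after %s\n", strings.TrimSpace(string(out)), time.Since(start))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver process")
    }
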
	I0610 12:30:56.175220    8536 api_server.go:88] waiting for apiserver healthz status ...
	I0610 12:30:56.175332    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:30:59.397470    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 12:30:59.398212    8536 api_server.go:103] status: https://172.17.150.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 12:30:59.398212    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:30:59.485722    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 12:30:59.485722    8536 api_server.go:103] status: https://172.17.150.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 12:30:59.677153    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:30:59.685073    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 12:30:59.685073    8536 api_server.go:103] status: https://172.17.150.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 12:31:00.178702    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:31:00.189602    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 12:31:00.189713    8536 api_server.go:103] status: https://172.17.150.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 12:31:00.685079    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:31:00.693473    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 12:31:00.693473    8536 api_server.go:103] status: https://172.17.150.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 12:31:01.175816    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:31:01.182969    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 200:
	ok
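The healthz sequence above is the expected restart progression: anonymous probes get 403 until RBAC is bootstrapped, then 500 while the rbac/bootstrap-roles and scheduling poststarthooks finish, and finally a bare 200 "ok". A sketch of a poll loop that treats 403 and 500 as retryable (TLS verification is skipped purely for illustration; address taken from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        for i := 0; i < 240; i++ { // ~2 minutes of half-second retries (assumed bound)
            resp, err := client.Get("https://172.17.150.144:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode) // 403, then 500
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for healthz")
    }
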
	I0610 12:31:01.182969    8536 round_trippers.go:463] GET https://172.17.150.144:8443/version
	I0610 12:31:01.182969    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:01.182969    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:01.182969    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:01.194421    8536 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 12:31:01.194701    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:01.194701    8536 round_trippers.go:580]     Audit-Id: bdef7251-952d-4176-808e-102f8bc9bca4
	I0610 12:31:01.194701    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:01.194767    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:01.194767    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:01.194810    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:01.194810    8536 round_trippers.go:580]     Content-Length: 263
	I0610 12:31:01.194838    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:01 GMT
	I0610 12:31:01.194914    8536 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 12:31:01.194914    8536 api_server.go:141] control plane version: v1.30.1
	I0610 12:31:01.194914    8536 api_server.go:131] duration metric: took 5.0196532s to wait for apiserver health ...
	I0610 12:31:01.194914    8536 cni.go:84] Creating CNI manager for ""
	I0610 12:31:01.194914    8536 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 12:31:01.198299    8536 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 12:31:01.216425    8536 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 12:31:01.225408    8536 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 12:31:01.225408    8536 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0610 12:31:01.225408    8536 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0610 12:31:01.225408    8536 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 12:31:01.225408    8536 command_runner.go:130] > Access: 2024-06-10 12:29:17.417483400 +0000
	I0610 12:31:01.225408    8536 command_runner.go:130] > Modify: 2024-06-06 15:35:25.000000000 +0000
	I0610 12:31:01.225408    8536 command_runner.go:130] > Change: 2024-06-10 12:29:06.186000000 +0000
	I0610 12:31:01.225408    8536 command_runner.go:130] >  Birth: -
	I0610 12:31:01.226407    8536 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 12:31:01.226407    8536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 12:31:01.303294    8536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 12:31:02.478704    8536 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0610 12:31:02.478841    8536 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0610 12:31:02.478841    8536 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0610 12:31:02.478841    8536 command_runner.go:130] > daemonset.apps/kindnet configured
	I0610 12:31:02.479112    8536 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1758084s)
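With the API server healthy, the kindnet manifest (chosen because three nodes were detected) is copied into the guest and applied with the version-pinned kubectl against the local kubeconfig. A sketch of the same two steps; the manifest bytes are a placeholder for the 2438-byte file in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifest := []byte("# kindnet ClusterRole/ClusterRoleBinding/ServiceAccount/DaemonSet ...")
        if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0644); err != nil {
            fmt.Println("write:", err)
            return
        }
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.30.1/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
        fmt.Printf("%s", out) // "unchanged"/"configured" lines as above
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
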
	I0610 12:31:02.479112    8536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:31:02.479112    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods
	I0610 12:31:02.479112    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.479112    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.479112    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.485944    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:02.485944    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.485944    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.485944    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.485944    8536 round_trippers.go:580]     Audit-Id: 14fde666-ec61-46cb-bd29-b228dcf0a637
	I0610 12:31:02.485944    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.485944    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.485944    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.487924    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1666"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87830 chars]
	I0610 12:31:02.494908    8536 system_pods.go:59] 12 kube-system pods found
	I0610 12:31:02.494908    8536 system_pods.go:61] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 12:31:02.494908    8536 system_pods.go:61] "etcd-multinode-813300" [f9259e5e-61e9-4252-b7c6-de5d499eb9c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 12:31:02.494908    8536 system_pods.go:61] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0610 12:31:02.494908    8536 system_pods.go:61] "kindnet-2pc4j" [966ce4c1-e9ee-48d6-9e52-98143fa03e67] Running
	I0610 12:31:02.494908    8536 system_pods.go:61] "kindnet-r4nfq" [dceb3d20-8d04-4408-927f-1c195558dd19] Running
	I0610 12:31:02.494908    8536 system_pods.go:61] "kube-apiserver-multinode-813300" [2cf29b2c-a2a9-46ec-bbc8-fe884e97df06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 12:31:02.494908    8536 system_pods.go:61] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 12:31:02.494908    8536 system_pods.go:61] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:31:02.494908    8536 system_pods.go:61] "kube-proxy-rx2b2" [ce59a99b-a561-4598-9399-147f748433a2] Running
	I0610 12:31:02.495900    8536 system_pods.go:61] "kube-proxy-vw56h" [f3f9e738-89d2-4776-a212-a1ca28952f7c] Running
	I0610 12:31:02.495900    8536 system_pods.go:61] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 12:31:02.495900    8536 system_pods.go:61] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:31:02.495900    8536 system_pods.go:74] duration metric: took 16.7882ms to wait for pod list to return data ...
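The system_pods summary above renders each pod's phase plus any false Ready/ContainersReady conditions. A sketch of producing the same summary with client-go, which is also the library behind the round_trippers/request lines in this log (kubeconfig path reused from the log):

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            line := string(p.Status.Phase)
            for _, c := range p.Status.Conditions {
                if (c.Type == v1.PodReady || c.Type == v1.ContainersReady) && c.Status != v1.ConditionTrue {
                    line += fmt.Sprintf(" / %s:%s", c.Type, c.Reason) // e.g. Ready:ContainersNotReady
                }
            }
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, line)
        }
    }
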
	I0610 12:31:02.495900    8536 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:31:02.495900    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes
	I0610 12:31:02.495900    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.495900    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.495900    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.500905    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:02.500905    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.501060    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.501060    8536 round_trippers.go:580]     Audit-Id: b1f4f287-acba-409f-8a8d-4d6717d703d2
	I0610 12:31:02.501060    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.501060    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.501060    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.501060    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.501836    8536 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1666"},"items":[{"metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16303 chars]
	I0610 12:31:02.503048    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:31:02.503101    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:31:02.503101    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:31:02.503101    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:31:02.503101    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:31:02.503101    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:31:02.503101    8536 node_conditions.go:105] duration metric: took 7.2006ms to run NodePressure ...
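The NodePressure verification reads the same two capacity fields from each of the three nodes in the NodeList above. A sketch using client-go for illustration:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The log records exactly these two values per node: 17734596Ki and 2.
            fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
                n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
    }
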
	I0610 12:31:02.503101    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 12:31:02.873571    8536 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 12:31:02.873571    8536 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 12:31:02.873571    8536 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0610 12:31:02.874581    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0610 12:31:02.874581    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.874581    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.874581    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.879570    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:02.879570    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.879570    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.879570    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.879570    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.879570    8536 round_trippers.go:580]     Audit-Id: 79a391e0-8be4-4aaa-beb0-e00a33e8b2c4
	I0610 12:31:02.879570    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.879570    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.880291    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1668"},"items":[{"metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"f9259e5e-61e9-4252-b7c6-de5d499eb9c1","resourceVersion":"1659","creationTimestamp":"2024-06-10T12:31:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.150.144:2379","kubernetes.io/config.hash":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.mirror":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.seen":"2024-06-10T12:30:54.120335207Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0610 12:31:02.882044    8536 kubeadm.go:733] kubelet initialised
	I0610 12:31:02.882044    8536 kubeadm.go:734] duration metric: took 8.473ms waiting for restarted kubelet to initialise ...
	I0610 12:31:02.882044    8536 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:31:02.882044    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods
	I0610 12:31:02.882044    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.882044    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.882044    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.887039    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:02.887039    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.887669    8536 round_trippers.go:580]     Audit-Id: 645d917a-adee-47ee-a51a-10c345996109
	I0610 12:31:02.887669    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.887669    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.887669    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.887669    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.887669    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.889974    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1668"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87830 chars]
	I0610 12:31:02.895373    8536 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:02.895494    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:02.895494    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.895494    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.895494    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.898800    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:02.898800    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.898800    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.898800    8536 round_trippers.go:580]     Audit-Id: 391001a6-7791-41f1-879f-b91a5ae733fc
	I0610 12:31:02.898800    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.898800    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.898800    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.898800    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.899331    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:02.899508    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:02.899508    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.899508    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.899508    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.902367    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:02.902367    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.902367    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.902367    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.902367    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.902367    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.902367    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.902367    8536 round_trippers.go:580]     Audit-Id: dfa7686e-65f7-4049-8bf9-d729b9f92192
	I0610 12:31:02.903372    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:02.903372    8536 pod_ready.go:97] node "multinode-813300" hosting pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.903372    8536 pod_ready.go:81] duration metric: took 7.9986ms for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:02.903372    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
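Each control-plane pod wait above is gated on the hosting node: when the node reports Ready=False, the pod wait is skipped immediately with the E-line error rather than burning the 4m0s budget. A sketch of that gate for a single pod, using client-go for illustration (pod and node names taken from the log):

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-multinode-813300", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == v1.NodeReady && c.Status != v1.ConditionTrue {
                // Matches the E-lines above: don't wait on a pod whose node is NotReady.
                fmt.Printf("node %q hosting pod %q is currently not \"Ready\" (skipping!)\n",
                    node.Name, pod.Name)
                return
            }
        }
        fmt.Println("node Ready; proceed to poll the pod's Ready condition (4m0s budget)")
    }
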
	I0610 12:31:02.903372    8536 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:02.903372    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-813300
	I0610 12:31:02.903372    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.903372    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.903372    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.906413    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:02.906413    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.906413    8536 round_trippers.go:580]     Audit-Id: efd0c92b-050e-4757-994b-e754b554d826
	I0610 12:31:02.906413    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.906413    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.906413    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.906710    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.906710    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.906923    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"f9259e5e-61e9-4252-b7c6-de5d499eb9c1","resourceVersion":"1659","creationTimestamp":"2024-06-10T12:31:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.150.144:2379","kubernetes.io/config.hash":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.mirror":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.seen":"2024-06-10T12:30:54.120335207Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0610 12:31:02.907621    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:02.907621    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.907621    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.907621    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.910397    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:02.910397    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.910455    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.910455    8536 round_trippers.go:580]     Audit-Id: 8150a705-5a3f-42c3-99d7-c74227871cc0
	I0610 12:31:02.910455    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.910455    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.910455    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.910455    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.910571    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:02.910571    8536 pod_ready.go:97] node "multinode-813300" hosting pod "etcd-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.911095    8536 pod_ready.go:81] duration metric: took 7.1996ms for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:02.911095    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "etcd-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.911095    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:02.911246    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-813300
	I0610 12:31:02.911246    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.911293    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.911293    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.913989    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:02.913989    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.914172    8536 round_trippers.go:580]     Audit-Id: 45056189-f5f9-49cb-bb3a-797c61c8592f
	I0610 12:31:02.914172    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.914172    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.914172    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.914172    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.914172    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.914356    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-813300","namespace":"kube-system","uid":"2cf29b2c-a2a9-46ec-bbc8-fe884e97df06","resourceVersion":"1655","creationTimestamp":"2024-06-10T12:31:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.150.144:8443","kubernetes.io/config.hash":"180cf4cc399d604c28cc4df1442ebd5a","kubernetes.io/config.mirror":"180cf4cc399d604c28cc4df1442ebd5a","kubernetes.io/config.seen":"2024-06-10T12:30:54.115839018Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0610 12:31:02.914924    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:02.915022    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.915022    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.915022    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.916930    8536 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 12:31:02.916930    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.916930    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.916930    8536 round_trippers.go:580]     Audit-Id: a410d13c-ec8b-40ab-a942-83be9c65946f
	I0610 12:31:02.916930    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.916930    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.916930    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.916930    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.916930    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:02.917925    8536 pod_ready.go:97] node "multinode-813300" hosting pod "kube-apiserver-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.917925    8536 pod_ready.go:81] duration metric: took 6.8295ms for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:02.917925    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "kube-apiserver-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.917925    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:02.917925    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-813300
	I0610 12:31:02.917925    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.917925    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.917925    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.920935    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:02.921515    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.921515    8536 round_trippers.go:580]     Audit-Id: 05339cf1-4ebb-4088-a220-d700387f99fd
	I0610 12:31:02.921515    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.921515    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.921515    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.921584    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.921584    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.921999    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-813300","namespace":"kube-system","uid":"879be9d7-8b2b-4f58-ba70-61d4e9f3441e","resourceVersion":"1654","creationTimestamp":"2024-06-10T12:08:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.mirror":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.seen":"2024-06-10T12:08:00.781970961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0610 12:31:02.922227    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:02.922227    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:02.922227    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:02.922227    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:02.928116    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:02.928116    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:02.928216    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:02.928216    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:02.928216    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:02.928216    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:02 GMT
	I0610 12:31:02.928216    8536 round_trippers.go:580]     Audit-Id: a74cf69d-2660-457a-bf90-45955074ce7b
	I0610 12:31:02.928216    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:02.928273    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:02.928984    8536 pod_ready.go:97] node "multinode-813300" hosting pod "kube-controller-manager-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.929039    8536 pod_ready.go:81] duration metric: took 11.1144ms for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:02.929039    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "kube-controller-manager-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:02.929039    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:03.093997    8536 request.go:629] Waited for 164.9567ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:31:03.093997    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:31:03.093997    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:03.093997    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:03.093997    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:03.103604    8536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 12:31:03.103604    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:03.103604    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:03.103604    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:03 GMT
	I0610 12:31:03.103604    8536 round_trippers.go:580]     Audit-Id: aad52ca2-9c19-4c33-83f0-ff570cea1992
	I0610 12:31:03.103604    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:03.103604    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:03.103604    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:03.104857    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrpvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1","resourceVersion":"1665","creationTimestamp":"2024-06-10T12:08:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0610 12:31:03.282628    8536 request.go:629] Waited for 177.0208ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:03.283099    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:03.283099    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:03.283099    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:03.283099    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:03.288630    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:03.289026    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:03.289063    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:03.289063    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:03.289129    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:03 GMT
	I0610 12:31:03.289129    8536 round_trippers.go:580]     Audit-Id: 596762d1-fe79-4da7-982f-3b1e85edaa26
	I0610 12:31:03.289162    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:03.289162    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:03.289408    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:03.289966    8536 pod_ready.go:97] node "multinode-813300" hosting pod "kube-proxy-nrpvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:03.289966    8536 pod_ready.go:81] duration metric: took 360.9241ms for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:03.289966    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "kube-proxy-nrpvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
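The repeated "Waited ... due to client-side throttling, not priority and fairness" entries above come from client-go's token-bucket rate limiter: once the client exhausts its QPS budget, each request is delayed until a token frees up, and the ~165-200ms waits match the default budget of 5 requests per second. A minimal Go sketch of where that budget lives, assuming the standard k8s.io/client-go packages rather than minikube's actual client wiring:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClientset builds a clientset with a larger client-side rate budget.
    // The defaults (QPS=5, Burst=10) are what produce the request.go:629
    // "Waited ... due to client-side throttling" lines during tight polling.
    func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, fmt.Errorf("load kubeconfig: %w", err)
    	}
    	cfg.QPS = 50    // sustained requests per second before throttling kicks in
    	cfg.Burst = 100 // short bursts allowed above QPS
    	return kubernetes.NewForConfig(cfg)
    }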
	I0610 12:31:03.289966    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:03.484026    8536 request.go:629] Waited for 194.0586ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:31:03.484462    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:31:03.484462    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:03.484521    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:03.484547    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:03.488022    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:03.488022    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:03.488022    8536 round_trippers.go:580]     Audit-Id: e3e76be0-0f77-481c-bb74-ff01b44ee288
	I0610 12:31:03.489016    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:03.489016    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:03.489016    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:03.489016    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:03.489016    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:03 GMT
	I0610 12:31:03.489259    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rx2b2","generateName":"kube-proxy-","namespace":"kube-system","uid":"ce59a99b-a561-4598-9399-147f748433a2","resourceVersion":"1632","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0610 12:31:03.686269    8536 request.go:629] Waited for 196.2702ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:31:03.686482    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:31:03.686482    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:03.686482    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:03.686482    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:03.693405    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:03.693561    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:03.693561    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:03.693561    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:03.693561    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:03.693561    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:03.693561    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:03 GMT
	I0610 12:31:03.693561    8536 round_trippers.go:580]     Audit-Id: 30e43582-bd8a-4f69-8a52-a61f15374c7f
	I0610 12:31:03.693561    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"1628","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4486 chars]
	I0610 12:31:03.694336    8536 pod_ready.go:97] node "multinode-813300-m02" hosting pod "kube-proxy-rx2b2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m02" has status "Ready":"Unknown"
	I0610 12:31:03.694336    8536 pod_ready.go:81] duration metric: took 404.3666ms for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:03.694336    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300-m02" hosting pod "kube-proxy-rx2b2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m02" has status "Ready":"Unknown"
	I0610 12:31:03.694336    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vw56h" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:03.888087    8536 request.go:629] Waited for 193.7498ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vw56h
	I0610 12:31:03.888457    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vw56h
	I0610 12:31:03.888705    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:03.888705    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:03.888705    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:03.892282    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:03.892709    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:03.892709    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:03.892709    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:03.892709    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:03 GMT
	I0610 12:31:03.892709    8536 round_trippers.go:580]     Audit-Id: 736fba8c-c8d1-49ab-9c03-8765cad1c045
	I0610 12:31:03.892709    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:03.892709    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:03.893266    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vw56h","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3f9e738-89d2-4776-a212-a1ca28952f7c","resourceVersion":"1595","creationTimestamp":"2024-06-10T12:25:52Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0610 12:31:04.093768    8536 request.go:629] Waited for 199.281ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m03
	I0610 12:31:04.093889    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m03
	I0610 12:31:04.093889    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:04.094038    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:04.094038    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:04.098761    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:04.098761    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:04.098761    8536 round_trippers.go:580]     Audit-Id: 62398428-3155-4df8-b2fb-6886a46ac3b0
	I0610 12:31:04.098761    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:04.098895    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:04.098895    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:04.098895    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:04.098895    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:04 GMT
	I0610 12:31:04.099264    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m03","uid":"7d0b0b62-45c8-40aa-9f7a-5bb189395355","resourceVersion":"1603","creationTimestamp":"2024-06-10T12:25:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_25_53_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4413 chars]
	I0610 12:31:04.099721    8536 pod_ready.go:97] node "multinode-813300-m03" hosting pod "kube-proxy-vw56h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m03" has status "Ready":"Unknown"
	I0610 12:31:04.099775    8536 pod_ready.go:81] duration metric: took 405.4353ms for pod "kube-proxy-vw56h" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:04.099775    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300-m03" hosting pod "kube-proxy-vw56h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m03" has status "Ready":"Unknown"
	I0610 12:31:04.099775    8536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:04.283268    8536 request.go:629] Waited for 183.0245ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:31:04.283356    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:31:04.283356    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:04.283356    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:04.283356    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:04.287323    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:04.288299    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:04.288299    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:04.288398    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:04 GMT
	I0610 12:31:04.288514    8536 round_trippers.go:580]     Audit-Id: e0579cbf-42b1-4b5f-9bc6-47a5f77d894f
	I0610 12:31:04.288514    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:04.288514    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:04.288514    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:04.288514    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-813300","namespace":"kube-system","uid":"bd85735c-2f0d-48ab-bb0e-83f471c3af0a","resourceVersion":"1658","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.mirror":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.seen":"2024-06-10T12:08:00.781972261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0610 12:31:04.489460    8536 request.go:629] Waited for 200.2037ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:04.489460    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:04.489460    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:04.489460    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:04.489460    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:04.493865    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:04.493865    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:04.493927    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:04.493927    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:04.493927    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:04.493927    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:04.493927    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:04 GMT
	I0610 12:31:04.493927    8536 round_trippers.go:580]     Audit-Id: 758fdf9e-0468-42f1-b687-07d4951d7bfc
	I0610 12:31:04.494276    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:04.494326    8536 pod_ready.go:97] node "multinode-813300" hosting pod "kube-scheduler-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:04.494326    8536 pod_ready.go:81] duration metric: took 394.5484ms for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	E0610 12:31:04.494326    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300" hosting pod "kube-scheduler-multinode-813300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300" has status "Ready":"False"
	I0610 12:31:04.494326    8536 pod_ready.go:38] duration metric: took 1.6122687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
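The pod_ready.go:97 entries above record the check that just finished: each system pod is fetched, the node named in its spec is fetched next, and the pod is skipped whenever that node's Ready condition is not True. A minimal sketch of that logic, assuming client-go; hostingNodeReady is a hypothetical stand-in for minikube's pod_ready.go internals:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // hostingNodeReady reports whether the node hosting the given pod has a
    // Ready condition of True - the condition behind the "node ... hosting
    // pod ... is currently not Ready (skipping!)" messages above.
    func hostingNodeReady(ctx context.Context, cs kubernetes.Interface, ns, pod string) (bool, error) {
    	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	n, err := cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }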
	I0610 12:31:04.494326    8536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 12:31:04.517070    8536 command_runner.go:130] > -16
	I0610 12:31:04.517070    8536 ops.go:34] apiserver oom_adj: -16
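The ssh_runner and ops.go lines above are minikube confirming the API server's OOM score: it runs cat /proc/$(pgrep kube-apiserver)/oom_adj over SSH and reads back -16, which tells the kernel's OOM killer to strongly prefer other victims. A hypothetical local Go equivalent of that probe (the real command resolves the pid remotely with pgrep):

    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    )

    // oomAdj reads /proc/<pid>/oom_adj and parses the score, mirroring the
    // ssh_runner command above.
    func oomAdj(pid int) (int, error) {
    	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(b)))
    }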
	I0610 12:31:04.517070    8536 kubeadm.go:591] duration metric: took 14.3080415s to restartPrimaryControlPlane
	I0610 12:31:04.517070    8536 kubeadm.go:393] duration metric: took 14.3690238s to StartCluster
	I0610 12:31:04.517070    8536 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:31:04.517070    8536 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:31:04.519181    8536 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
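settings.go:142 and lock.go:35 above show the kubeconfig update being serialized through named locks with a 500ms retry delay and a 1m0s timeout, so concurrent minikube invocations cannot interleave writes to the same file. A stdlib-only Go sketch of that acquire-with-timeout pattern; minikube uses its own lock package, and writeWithLock here is hypothetical:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // writeWithLock retries an exclusive lockfile every 500ms for up to a
    // minute (the Delay/Timeout values shown in the log), then writes the file.
    func writeWithLock(path string, data []byte) error {
    	lock := path + ".lock"
    	deadline := time.Now().Add(time.Minute)
    	for {
    		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			defer os.Remove(lock)
    			return os.WriteFile(path, data, 0o600)
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out acquiring %s", lock)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }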
	I0610 12:31:04.520611    8536 start.go:234] Will wait 6m0s for node &{Name: IP:172.17.150.144 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 12:31:04.520611    8536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 12:31:04.529652    8536 out.go:177] * Verifying Kubernetes components...
	I0610 12:31:04.520984    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:31:04.534463    8536 out.go:177] * Enabled addons: 
	I0610 12:31:04.537036    8536 addons.go:510] duration metric: took 16.5027ms for enable addons: enabled=[]
	I0610 12:31:04.545277    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:31:04.828330    8536 ssh_runner.go:195] Run: sudo systemctl start kubelet
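The two ssh_runner commands above are the usual systemd sequence after kubelet's unit or flags have been rewritten: reload the unit definitions, then start the service. A hypothetical local Go equivalent of what ssh_runner executes on the guest:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // restartKubelet mirrors the two commands in the log: reload systemd
    // unit files, then start kubelet.
    func restartKubelet() error {
    	for _, args := range [][]string{
    		{"sudo", "systemctl", "daemon-reload"},
    		{"sudo", "systemctl", "start", "kubelet"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v failed: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }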
	I0610 12:31:04.855843    8536 node_ready.go:35] waiting up to 6m0s for node "multinode-813300" to be "Ready" ...
	I0610 12:31:04.855843    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:04.855843    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:04.855843    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:04.855843    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:04.859860    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:04.859860    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:04.859860    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:04.860582    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:04 GMT
	I0610 12:31:04.860582    8536 round_trippers.go:580]     Audit-Id: 15e53225-994c-4023-a98c-d402e1c3231a
	I0610 12:31:04.860582    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:04.860582    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:04.860582    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:04.860871    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:05.370840    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:05.370840    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:05.370840    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:05.370840    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:05.375459    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:05.375995    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:05.376058    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:05.376058    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:05.376058    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:05 GMT
	I0610 12:31:05.376058    8536 round_trippers.go:580]     Audit-Id: 686b7362-0e64-4b0e-9fde-542a554fb89c
	I0610 12:31:05.376058    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:05.376109    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:05.376933    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:05.857080    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:05.857080    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:05.857080    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:05.857080    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:05.861996    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:05.861996    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:05.862192    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:05.862192    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:05 GMT
	I0610 12:31:05.862192    8536 round_trippers.go:580]     Audit-Id: 6319b36e-4baf-465d-8539-c68d257be543
	I0610 12:31:05.862192    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:05.862249    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:05.862249    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:05.862249    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:06.369456    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:06.369566    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:06.369592    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:06.369592    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:06.377335    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:31:06.377335    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:06.377335    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:06.377335    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:06.377335    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:06 GMT
	I0610 12:31:06.377335    8536 round_trippers.go:580]     Audit-Id: d47d764a-acd3-4949-b4f2-e427230cb069
	I0610 12:31:06.377335    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:06.377335    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:06.378326    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:06.862414    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:06.862414    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:06.862414    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:06.862414    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:06.869332    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:06.869630    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:06.869630    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:06.869630    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:06.869630    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:06.869630    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:06 GMT
	I0610 12:31:06.869630    8536 round_trippers.go:580]     Audit-Id: 2816d510-1ee4-4f86-aeaa-7aa02d72832e
	I0610 12:31:06.869630    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:06.869853    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:06.870312    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
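From here the node_ready.go loop re-fetches the node roughly every 500ms and logs Ready:False until the condition flips, within the 6m0s budget announced when the wait began. A minimal sketch of such a poll, assuming client-go plus apimachinery's wait helpers; waitNodeReady is a hypothetical stand-in for minikube's node_ready.go loop:

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node's Ready condition every 500ms for up to
    // 6 minutes, treating fetch errors as transient, as the loop above does.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API error: keep polling
    			}
    			for _, c := range n.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }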
	I0610 12:31:07.364564    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:07.364564    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:07.364564    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:07.364564    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:07.375560    8536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 12:31:07.375560    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:07.376229    8536 round_trippers.go:580]     Audit-Id: 545dd30e-8a0f-429d-8819-d204a09fb4c9
	I0610 12:31:07.376229    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:07.376229    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:07.376229    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:07.376229    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:07.376229    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:07 GMT
	I0610 12:31:07.377998    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:07.865852    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:07.865852    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:07.865852    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:07.865852    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:07.868948    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:07.868948    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:07.868948    8536 round_trippers.go:580]     Audit-Id: 095f6c15-9744-4344-b32e-fcd499f64221
	I0610 12:31:07.868948    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:07.868948    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:07.868948    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:07.868948    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:07.868948    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:07 GMT
	I0610 12:31:07.870138    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:08.367045    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:08.367045    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:08.367149    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:08.367149    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:08.370607    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:08.370607    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:08.370607    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:08.370607    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:08.370607    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:08.370607    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:08 GMT
	I0610 12:31:08.370607    8536 round_trippers.go:580]     Audit-Id: 19823bba-3d76-4486-b4a2-424db46ae187
	I0610 12:31:08.370607    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:08.371532    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:08.869632    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:08.869632    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:08.869632    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:08.869632    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:08.873292    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:08.873292    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:08.874256    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:08.874256    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:08.874256    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:08.874256    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:08 GMT
	I0610 12:31:08.874256    8536 round_trippers.go:580]     Audit-Id: 00776131-78cc-409b-8180-7c752bda2b41
	I0610 12:31:08.874256    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:08.875012    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:08.875328    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:09.356174    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:09.356174    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:09.356174    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:09.356174    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:09.361145    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:09.361145    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:09.361145    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:09.361145    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:09.361145    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:09.361145    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:09 GMT
	I0610 12:31:09.361336    8536 round_trippers.go:580]     Audit-Id: 561a4994-96f1-4dc5-931f-3c662c5d48ad
	I0610 12:31:09.361336    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:09.362202    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:09.870079    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:09.870079    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:09.870079    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:09.870079    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:09.874658    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:09.874771    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:09.874823    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:09.874823    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:09.874823    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:09 GMT
	I0610 12:31:09.874823    8536 round_trippers.go:580]     Audit-Id: c7d43032-7422-4a33-bbd8-f40e797970da
	I0610 12:31:09.874823    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:09.874823    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:09.875070    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:10.371088    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:10.371143    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:10.371143    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:10.371143    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:10.381713    8536 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 12:31:10.381713    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:10.381713    8536 round_trippers.go:580]     Audit-Id: 6b620a72-4b17-4a68-8a6c-178a71c44b69
	I0610 12:31:10.381713    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:10.381713    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:10.381713    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:10.381713    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:10.381713    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:10 GMT
	I0610 12:31:10.381713    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:10.857403    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:10.857403    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:10.857403    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:10.857498    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:10.861794    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:10.861794    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:10.861794    8536 round_trippers.go:580]     Audit-Id: 1114c439-c7d9-4c50-9978-9b78ad5a4366
	I0610 12:31:10.862125    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:10.862125    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:10.862125    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:10.862125    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:10.862125    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:10 GMT
	I0610 12:31:10.862649    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:11.357205    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:11.357455    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:11.357455    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:11.357455    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:11.362606    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:11.362906    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:11.362906    8536 round_trippers.go:580]     Audit-Id: b5896b7c-0c01-45c2-ae50-333382781561
	I0610 12:31:11.362906    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:11.362906    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:11.362906    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:11.362906    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:11.362906    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:11 GMT
	I0610 12:31:11.363362    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:11.363967    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:11.870385    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:11.870385    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:11.870385    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:11.870385    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:11.874411    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:11.874624    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:11.874624    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:11 GMT
	I0610 12:31:11.874624    8536 round_trippers.go:580]     Audit-Id: 75974c77-7c9c-42f8-a12a-1c4062356981
	I0610 12:31:11.874624    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:11.874624    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:11.874624    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:11.874624    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:11.875048    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1645","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0610 12:31:12.363248    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:12.363326    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:12.363326    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:12.363326    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:12.368022    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:12.368022    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:12.368704    8536 round_trippers.go:580]     Audit-Id: 20dfe8b4-e0fc-464e-a67e-9378ef8bc30c
	I0610 12:31:12.368704    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:12.368704    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:12.368704    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:12.368704    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:12.368704    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:12 GMT
	I0610 12:31:12.369241    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:12.857523    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:12.857523    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:12.857523    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:12.857523    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:12.861079    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:12.862093    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:12.862160    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:12.862160    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:12.862160    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:12.862160    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:12 GMT
	I0610 12:31:12.862160    8536 round_trippers.go:580]     Audit-Id: 27a2a33c-0f97-433e-8b7e-ac51a74032fd
	I0610 12:31:12.862160    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:12.862268    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:13.359103    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:13.359161    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:13.359234    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:13.359234    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:13.363609    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:13.363609    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:13.363609    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:13.363609    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:13.363609    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:13.363609    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:13 GMT
	I0610 12:31:13.363609    8536 round_trippers.go:580]     Audit-Id: 1c38016d-8886-47b0-bb3e-f3f7ab72deee
	I0610 12:31:13.363711    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:13.363979    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:13.364503    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
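
The cycle above repeats for the rest of this wait: node_ready issues a GET against /api/v1/nodes/multinode-813300 roughly every 500 ms and re-checks the NodeReady condition, logging `has status "Ready":"False"` until it flips. Note the object is still changing underneath the poll: resourceVersion ticks from 1645 to 1756 at 12:31:12 even though Ready stays False. A minimal Go sketch of this pattern, assuming a configured client-go clientset; the names and wiring here are illustrative, not minikube's actual node_ready.go code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady mirrors the poll visible in the log: GET the node on a
// ~500ms cadence and stop once the NodeReady condition reports True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log above
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// Hypothetical kubeconfig path; any configured rest.Config works here.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-813300", 6*time.Minute); err != nil {
		panic(err)
	}
}
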
	I0610 12:31:13.856567    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:13.856567    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:13.856567    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:13.856567    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:13.860142    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:13.860551    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:13.860551    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:13.860551    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:13.860551    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:13.860551    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:13 GMT
	I0610 12:31:13.860551    8536 round_trippers.go:580]     Audit-Id: 0014ddf1-0515-41cc-9499-05608e150ddf
	I0610 12:31:13.860551    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:13.860851    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:14.357777    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:14.357777    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:14.357777    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:14.357777    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:14.366354    8536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:31:14.366354    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:14.366354    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:14.366354    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:14.366354    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:14.366354    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:14 GMT
	I0610 12:31:14.366354    8536 round_trippers.go:580]     Audit-Id: 852a7f36-43e9-487c-a927-6cecde986e56
	I0610 12:31:14.366354    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:14.367125    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:14.856870    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:14.856870    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:14.856870    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:14.856870    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:14.860638    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:14.860638    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:14.860638    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:14 GMT
	I0610 12:31:14.860638    8536 round_trippers.go:580]     Audit-Id: 11acd36a-af8f-4553-951e-85ca9ea63563
	I0610 12:31:14.860638    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:14.860638    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:14.860638    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:14.860638    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:14.861309    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:15.358101    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:15.358101    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:15.358101    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:15.358101    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:15.362673    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:15.362673    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:15.362673    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:15 GMT
	I0610 12:31:15.362673    8536 round_trippers.go:580]     Audit-Id: 8e5f4822-4835-42e5-ab07-f395f11247af
	I0610 12:31:15.362673    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:15.362673    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:15.362673    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:15.362673    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:15.363384    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:15.860012    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:15.860110    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:15.860110    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:15.860110    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:15.864256    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:15.864486    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:15.864486    8536 round_trippers.go:580]     Audit-Id: 982f2ca7-d31b-4059-9b88-021b2bc81b79
	I0610 12:31:15.864486    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:15.864486    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:15.864486    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:15.864486    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:15.864486    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:15 GMT
	I0610 12:31:15.864607    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:15.865087    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:16.361273    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:16.361273    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:16.361273    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:16.361273    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:16.364878    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:16.364878    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:16.364878    8536 round_trippers.go:580]     Audit-Id: fb7a1628-f5cc-4802-97c1-e80408edc392
	I0610 12:31:16.364878    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:16.364878    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:16.364878    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:16.365482    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:16.365482    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:16 GMT
	I0610 12:31:16.365790    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:16.863371    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:16.863371    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:16.863371    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:16.863371    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:16.868740    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:16.868740    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:16.868740    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:16.868740    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:16.868740    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:16.868740    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:16.868740    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:16 GMT
	I0610 12:31:16.868740    8536 round_trippers.go:580]     Audit-Id: f56f538b-0d99-4511-a747-70f09906dd49
	I0610 12:31:16.868740    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:17.361283    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:17.361283    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:17.361283    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:17.361283    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:17.365875    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:17.365875    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:17.366333    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:17.366333    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:17 GMT
	I0610 12:31:17.366333    8536 round_trippers.go:580]     Audit-Id: 09c6beed-7ae1-47c8-8353-4b9e566178be
	I0610 12:31:17.366333    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:17.366333    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:17.366333    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:17.367455    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:17.865450    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:17.865450    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:17.865450    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:17.865450    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:17.871557    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:17.871557    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:17.871557    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:17 GMT
	I0610 12:31:17.871557    8536 round_trippers.go:580]     Audit-Id: 74d7db3e-e1f8-4692-aa8a-208592cf3f0e
	I0610 12:31:17.871557    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:17.871557    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:17.871557    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:17.871557    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:17.872270    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:17.872305    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:18.361564    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:18.361775    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:18.361775    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:18.361775    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:18.367381    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:18.367447    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:18.367447    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:18.367447    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:18.367447    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:18 GMT
	I0610 12:31:18.367447    8536 round_trippers.go:580]     Audit-Id: e43159e7-219d-4a2a-8109-c6df286f0526
	I0610 12:31:18.367447    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:18.367447    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:18.368556    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:18.860165    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:18.860165    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:18.860165    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:18.860165    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:18.864741    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:18.865168    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:18.865168    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:18.865168    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:18.865168    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:18.865237    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:18.865237    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:18 GMT
	I0610 12:31:18.865237    8536 round_trippers.go:580]     Audit-Id: 31193f4f-5a0e-441f-920b-a3a715a135fb
	I0610 12:31:18.866131    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:19.360158    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:19.360158    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:19.360158    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:19.360158    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:19.364731    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:19.364731    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:19.364821    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:19.364821    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:19.364821    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:19.364821    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:19.364821    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:19 GMT
	I0610 12:31:19.364821    8536 round_trippers.go:580]     Audit-Id: 2c11c231-226f-4645-82b9-fd7ad7caebf2
	I0610 12:31:19.365714    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:19.862501    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:19.862716    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:19.862716    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:19.862716    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:19.866342    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:19.866960    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:19.866960    8536 round_trippers.go:580]     Audit-Id: dd35cafe-f16c-4734-a401-3c5ae758eb58
	I0610 12:31:19.866960    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:19.866960    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:19.866960    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:19.866960    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:19.866960    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:19 GMT
	I0610 12:31:19.867315    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:20.364292    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:20.364396    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:20.364396    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:20.364396    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:20.368533    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:20.368533    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:20.368533    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:20.368533    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:20.368533    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:20.368533    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:20.368910    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:20 GMT
	I0610 12:31:20.368910    8536 round_trippers.go:580]     Audit-Id: 7f6d2e80-1ea2-4d43-8ca8-70ff1be9c559
	I0610 12:31:20.369342    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:20.369871    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
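
From 12:31:12 through this point every poll has returned the identical object (resourceVersion stays at 1756), so each of these round trips re-fetches unchanged state. The same wait can be expressed as a watch, which delivers an event only when the node actually changes. A sketch under the same imports and assumptions as above; this is illustrative only, since the log clearly shows minikube polling rather than watching:

// watchNodeReady waits on a watch instead of a fixed-interval poll.
// A watch only reports changes after it starts, so real code would pair
// this with an initial Get (or use an informer) to cover the case where
// the node is already Ready when the wait begins.
func watchNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	w, err := cs.CoreV1().Nodes().Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		node, ok := event.Object.(*corev1.Node)
		if !ok {
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
	}
	return fmt.Errorf("watch on node %q ended before it became Ready", name)
}
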
	I0610 12:31:20.862384    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:20.862384    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:20.862384    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:20.862384    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:20.866116    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:20.866116    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:20.866116    8536 round_trippers.go:580]     Audit-Id: 4926f0b2-c8f5-4e50-bbaa-dbd8b94d5ed6
	I0610 12:31:20.866116    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:20.866116    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:20.866116    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:20.866116    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:20.866116    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:20 GMT
	I0610 12:31:20.866116    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:21.361116    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:21.361116    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:21.361179    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:21.361179    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:21.365288    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:21.365288    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:21.365388    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:21.365388    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:21.365388    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:21.365388    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:21.365388    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:21 GMT
	I0610 12:31:21.365388    8536 round_trippers.go:580]     Audit-Id: 844bac56-5571-44e0-b7da-3f06f20be76d
	I0610 12:31:21.365857    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:21.857464    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:21.857464    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:21.857464    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:21.857464    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:21.861068    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:21.861068    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:21.861068    8536 round_trippers.go:580]     Audit-Id: 1e325dcb-6413-4a8b-9579-f2ccb7f6d5d3
	I0610 12:31:21.861068    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:21.861068    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:21.861068    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:21.861068    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:21.861068    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:21 GMT
	I0610 12:31:21.861878    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:22.371789    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:22.371883    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:22.371883    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:22.371883    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:22.378186    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:22.378186    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:22.378186    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:22.378186    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:22.378725    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:22.378725    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:22 GMT
	I0610 12:31:22.378725    8536 round_trippers.go:580]     Audit-Id: d7f96531-7830-4482-be7a-9f13e50dd6fb
	I0610 12:31:22.378725    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:22.383890    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:22.384923    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:22.869533    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:22.869693    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:22.869693    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:22.869693    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:22.873138    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:22.873138    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:22.874083    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:22.874107    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:22.874107    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:22 GMT
	I0610 12:31:22.874107    8536 round_trippers.go:580]     Audit-Id: 7c2fdb38-2fee-4ba4-9a69-fecb00a22a0c
	I0610 12:31:22.874107    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:22.874107    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:22.874241    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:23.371629    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:23.371629    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:23.371629    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:23.371629    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:23.378268    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:23.378268    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:23.378268    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:23 GMT
	I0610 12:31:23.378268    8536 round_trippers.go:580]     Audit-Id: 03923fa6-4cfb-4283-a98f-c76fb80bd4b3
	I0610 12:31:23.378268    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:23.378268    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:23.378268    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:23.379202    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:23.379656    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:23.871472    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:23.871472    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:23.871562    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:23.871562    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:23.878517    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:23.878517    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:23.878517    8536 round_trippers.go:580]     Audit-Id: c179c4dc-fc40-42f9-b3e3-9ccbdb015122
	I0610 12:31:23.878517    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:23.878517    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:23.878517    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:23.878517    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:23.878517    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:23 GMT
	I0610 12:31:23.879220    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:24.358793    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:24.358863    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:24.358863    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:24.358863    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:24.362851    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:24.362851    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:24.362851    8536 round_trippers.go:580]     Audit-Id: ca360c92-f767-4e55-a419-0552a7369626
	I0610 12:31:24.362851    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:24.362851    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:24.362851    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:24.362851    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:24.362851    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:24 GMT
	I0610 12:31:24.363528    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:24.856948    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:24.857005    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:24.857005    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:24.857005    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:24.860751    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:24.860751    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:24.860751    8536 round_trippers.go:580]     Audit-Id: 73aa6dca-a1f5-44a3-ba5e-3f70265c9c2e
	I0610 12:31:24.860751    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:24.860751    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:24.861170    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:24.861170    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:24.861170    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:24 GMT
	I0610 12:31:24.861441    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:24.862211    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:25.357660    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:25.357660    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:25.357907    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:25.357907    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:25.363539    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:25.363539    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:25.363539    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:25 GMT
	I0610 12:31:25.363539    8536 round_trippers.go:580]     Audit-Id: 903e0c48-0e77-42ec-b9d1-0baac86ecee2
	I0610 12:31:25.363539    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:25.363539    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:25.363539    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:25.363771    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:25.363941    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:25.870965    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:25.870965    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:25.870965    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:25.870965    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:25.877316    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:25.877874    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:25.877874    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:25.877874    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:25 GMT
	I0610 12:31:25.877874    8536 round_trippers.go:580]     Audit-Id: a7c53500-9ebe-40ed-93a5-77bf3f264e49
	I0610 12:31:25.877874    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:25.877874    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:25.877874    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:25.877874    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:26.370926    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:26.371005    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:26.371067    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:26.371067    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:26.382252    8536 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 12:31:26.383034    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:26.383034    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:26.383034    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:26 GMT
	I0610 12:31:26.383034    8536 round_trippers.go:580]     Audit-Id: 7c47f2b3-a9ea-4689-b7c0-e6a77dee8e6d
	I0610 12:31:26.383034    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:26.383034    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:26.383034    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:26.383422    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:26.857035    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:26.857303    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:26.857303    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:26.857303    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:26.861139    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:26.861139    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:26.861139    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:26 GMT
	I0610 12:31:26.861139    8536 round_trippers.go:580]     Audit-Id: cd58463f-2d1b-483a-9aef-81d962de2284
	I0610 12:31:26.861139    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:26.862023    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:26.862023    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:26.862023    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:26.862223    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:26.862459    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:27.368131    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:27.368384    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:27.368384    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:27.368384    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:27.375233    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:27.375233    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:27.375569    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:27.375569    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:27.375569    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:27.375569    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:27.375569    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:27 GMT
	I0610 12:31:27.375569    8536 round_trippers.go:580]     Audit-Id: c685f387-2006-41de-87d1-2fac2d364dfb
	I0610 12:31:27.375779    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:27.868617    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:27.868617    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:27.868617    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:27.868617    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:27.875205    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:27.875205    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:27.875205    8536 round_trippers.go:580]     Audit-Id: 11154572-8125-449d-aad7-14285bc484fd
	I0610 12:31:27.875264    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:27.875264    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:27.875289    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:27.875289    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:27.875289    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:27 GMT
	I0610 12:31:27.876540    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:28.370393    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:28.370393    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:28.370393    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:28.370393    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:28.373912    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:28.373912    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:28.373912    8536 round_trippers.go:580]     Audit-Id: 272faba4-e44c-4f90-8e7b-014fe09c18ef
	I0610 12:31:28.373912    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:28.373912    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:28.374248    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:28.374248    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:28.374248    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:28 GMT
	I0610 12:31:28.374548    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:28.868158    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:28.868334    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:28.868334    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:28.868334    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:28.872115    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:28.872782    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:28.872782    8536 round_trippers.go:580]     Audit-Id: 5d0f4677-7a25-46c6-a02d-b4674b84bae7
	I0610 12:31:28.872782    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:28.872782    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:28.872782    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:28.872782    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:28.872782    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:28 GMT
	I0610 12:31:28.873177    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:28.873762    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:29.366774    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:29.366869    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:29.366869    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:29.366937    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:29.370058    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:29.370999    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:29.370999    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:29.371050    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:29.371050    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:29.371050    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:29 GMT
	I0610 12:31:29.371050    8536 round_trippers.go:580]     Audit-Id: e3260a1a-2bf0-4a47-b2db-3cbdb7c0fb4b
	I0610 12:31:29.371050    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:29.372056    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:29.864441    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:29.864494    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:29.864549    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:29.864549    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:29.869032    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:29.869085    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:29.869085    8536 round_trippers.go:580]     Audit-Id: f4b8d086-ac92-4e34-9863-da569cfb7415
	I0610 12:31:29.869085    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:29.869085    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:29.869085    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:29.869085    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:29.869085    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:29 GMT
	I0610 12:31:29.869085    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:30.364844    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:30.364844    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:30.364844    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:30.364844    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:30.369311    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:30.369311    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:30.369311    8536 round_trippers.go:580]     Audit-Id: 7a3d5314-3749-46e6-8736-40338ea99b68
	I0610 12:31:30.369311    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:30.369311    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:30.369311    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:30.369311    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:30.369311    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:30 GMT
	I0610 12:31:30.369532    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:30.865549    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:30.865603    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:30.865603    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:30.865603    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:30.871866    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:30.871866    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:30.871866    8536 round_trippers.go:580]     Audit-Id: db76d763-8ba1-4c5b-a42e-2cecd3c0c3db
	I0610 12:31:30.871866    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:30.871866    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:30.871866    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:30.871866    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:30.871866    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:30 GMT
	I0610 12:31:30.872104    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:31.365725    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:31.365796    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:31.365796    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:31.365796    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:31.370234    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:31.370946    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:31.370946    8536 round_trippers.go:580]     Audit-Id: 5383b6e7-d074-4fd4-9db9-4a2e887f2722
	I0610 12:31:31.370946    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:31.370946    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:31.370946    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:31.370946    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:31.370946    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:31 GMT
	I0610 12:31:31.371514    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:31.372138    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:31.862468    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:31.862548    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:31.862548    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:31.862548    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:31.867774    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:31.868107    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:31.868107    8536 round_trippers.go:580]     Audit-Id: a64d1ee7-c0f1-48ea-b59f-a5665fb7089e
	I0610 12:31:31.868107    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:31.868107    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:31.868107    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:31.868107    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:31.868107    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:31 GMT
	I0610 12:31:31.868336    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:32.363307    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:32.363307    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:32.363307    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:32.363307    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:32.367136    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:32.367761    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:32.367761    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:32.367761    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:32.367761    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:32 GMT
	I0610 12:31:32.367761    8536 round_trippers.go:580]     Audit-Id: c8c5eb2a-f110-46da-b0d6-6475e680fc96
	I0610 12:31:32.367761    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:32.367761    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:32.368160    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:32.863033    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:32.863033    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:32.863033    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:32.863033    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:32.866628    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:32.866628    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:32.866628    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:32.866628    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:32 GMT
	I0610 12:31:32.866628    8536 round_trippers.go:580]     Audit-Id: 9fda6572-778c-4a60-a9a1-562a5b61a5e1
	I0610 12:31:32.867340    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:32.867340    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:32.867340    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:32.867512    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:33.362823    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:33.362823    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:33.362823    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:33.362823    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:33.367401    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:33.367644    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:33.367644    8536 round_trippers.go:580]     Audit-Id: 4556772d-63da-4957-8079-37f6421f63ad
	I0610 12:31:33.367644    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:33.367644    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:33.367644    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:33.367644    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:33.367644    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:33 GMT
	I0610 12:31:33.368417    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:33.865974    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:33.866169    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:33.866169    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:33.866169    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:33.868965    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:33.868965    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:33.869734    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:33.869734    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:33.869734    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:33.869734    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:33 GMT
	I0610 12:31:33.869734    8536 round_trippers.go:580]     Audit-Id: edc9d0d1-4814-48f6-9056-9ab14ef05667
	I0610 12:31:33.869734    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:33.870569    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:33.870569    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:34.367368    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:34.367368    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:34.367443    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:34.367443    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:34.374281    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:34.374281    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:34.375217    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:34 GMT
	I0610 12:31:34.375217    8536 round_trippers.go:580]     Audit-Id: 123712e4-511d-4bd4-ba72-cb1ae25a05ef
	I0610 12:31:34.375217    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:34.375217    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:34.375217    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:34.375217    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:34.375708    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:34.866882    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:34.866882    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:34.866882    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:34.866882    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:34.872362    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:34.872484    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:34.872484    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:34.872484    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:34.872484    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:34.872484    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:34 GMT
	I0610 12:31:34.872484    8536 round_trippers.go:580]     Audit-Id: 4201143c-f970-42b2-88a2-b0b877cdef02
	I0610 12:31:34.872484    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:34.872943    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:35.364571    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:35.364571    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:35.364571    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:35.364571    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:35.368215    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:35.368215    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:35.368215    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:35.368215    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:35.368215    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:35 GMT
	I0610 12:31:35.368215    8536 round_trippers.go:580]     Audit-Id: fd09e71b-269c-438a-807d-9a432cee024d
	I0610 12:31:35.368215    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:35.368215    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:35.368215    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:35.867160    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:35.867260    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:35.867330    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:35.867330    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:35.870675    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:35.870675    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:35.870675    8536 round_trippers.go:580]     Audit-Id: 432b1fa0-2b23-4016-a3e0-d9b2b3406905
	I0610 12:31:35.870675    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:35.870675    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:35.871419    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:35.871419    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:35.871419    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:35 GMT
	I0610 12:31:35.871570    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:35.871960    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
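The repeating GET / "Request Headers" / "Response Headers" / "Response Body" quadruples above are client-go's verbose request tracing from round_trippers.go, emitted because these tests run at high log verbosity. A rough sketch of the wrapping-RoundTripper pattern that produces this kind of trace follows; debugTransport and the placeholder URL are illustrative, not client-go's actual implementation:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // debugTransport wraps another RoundTripper and prints each request's
    // method, URL, and headers, then the response status, latency, and
    // headers, in the spirit of the round_trippers.go lines in the log above.
    type debugTransport struct{ next http.RoundTripper }

    func (d debugTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	fmt.Printf("%s %s\n", req.Method, req.URL)
    	fmt.Println("Request Headers:")
    	for k, v := range req.Header {
    		fmt.Printf("    %s: %v\n", k, v)
    	}
    	start := time.Now()
    	resp, err := d.next.RoundTrip(req)
    	if err != nil {
    		return nil, err
    	}
    	fmt.Printf("Response Status: %s in %d milliseconds\n",
    		resp.Status, time.Since(start).Milliseconds())
    	fmt.Println("Response Headers:")
    	for k, v := range resp.Header {
    		fmt.Printf("    %s: %v\n", k, v)
    	}
    	return resp, nil
    }

    func main() {
    	// example.com is a placeholder endpoint; the wait loop above hits the
    	// apiserver at https://172.17.150.144:8443 instead.
    	client := &http.Client{Transport: debugTransport{next: http.DefaultTransport}}
    	resp, err := client.Get("https://example.com/")
    	if err != nil {
    		panic(err)
    	}
    	resp.Body.Close()
    }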
	I0610 12:31:36.365961    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:36.365961    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:36.365961    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:36.365961    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:36.372184    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:36.372184    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:36.372184    8536 round_trippers.go:580]     Audit-Id: 9beb8238-9c6e-440c-921a-bc94e4ed2f4b
	I0610 12:31:36.372184    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:36.372184    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:36.372184    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:36.372184    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:36.372184    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:36 GMT
	I0610 12:31:36.372863    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:36.863959    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:36.863959    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:36.863959    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:36.863959    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:36.869673    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:36.869673    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:36.869673    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:36.869673    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:36.869673    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:36.870193    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:36 GMT
	I0610 12:31:36.870193    8536 round_trippers.go:580]     Audit-Id: 47650ea9-8114-4579-968f-197458528810
	I0610 12:31:36.870193    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:36.870307    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:37.363447    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:37.363521    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:37.363593    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:37.363593    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:37.367192    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:37.367192    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:37.367192    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:37.367192    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:37.367192    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:37.367192    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:37.367192    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:37 GMT
	I0610 12:31:37.367192    8536 round_trippers.go:580]     Audit-Id: 6961d302-9957-4018-be0a-c46fbe7c037b
	I0610 12:31:37.369113    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:37.861980    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:37.861980    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:37.861980    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:37.861980    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:37.865953    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:37.865953    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:37.866359    8536 round_trippers.go:580]     Audit-Id: abcbec62-8a5f-4b81-b57d-19ecc2155e36
	I0610 12:31:37.866359    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:37.866359    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:37.866359    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:37.866359    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:37.866359    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:37 GMT
	I0610 12:31:37.866873    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:38.362461    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:38.362461    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:38.362570    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:38.362570    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:38.368350    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:38.368350    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:38.368350    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:38.368350    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:38.368350    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:38.368350    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:38 GMT
	I0610 12:31:38.368350    8536 round_trippers.go:580]     Audit-Id: e186a4a0-b6d7-490a-b85f-d736f69651b5
	I0610 12:31:38.368350    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:38.369387    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:38.369793    8536 node_ready.go:53] node "multinode-813300" has status "Ready":"False"
	I0610 12:31:38.860093    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:38.860384    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:38.860384    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:38.860384    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:38.863760    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:38.864796    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:38.864796    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:38.864796    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:38.864796    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:38.864796    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:38.864796    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:38 GMT
	I0610 12:31:38.864796    8536 round_trippers.go:580]     Audit-Id: 77978789-9ca8-42cc-b693-c333d51015d1
	I0610 12:31:38.865448    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:39.362259    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:39.362259    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:39.362259    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:39.362259    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:39.365816    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:39.365816    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:39.365816    8536 round_trippers.go:580]     Audit-Id: 0a193f20-5eb2-4830-8193-e296cd820111
	I0610 12:31:39.365816    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:39.365816    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:39.365816    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:39.366260    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:39.366260    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:39 GMT
	I0610 12:31:39.366817    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:39.866472    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:39.866472    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:39.866472    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:39.866472    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:39.871011    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:39.871276    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:39.871276    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:39 GMT
	I0610 12:31:39.871276    8536 round_trippers.go:580]     Audit-Id: d06249ae-59f0-45a4-bdc8-2acbc8cb5fc9
	I0610 12:31:39.871276    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:39.871276    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:39.871356    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:39.871356    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:39.871457    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:40.363517    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:40.363517    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:40.363517    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:40.363517    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:40.367813    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:40.367813    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:40.367813    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:40.367813    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:40 GMT
	I0610 12:31:40.367813    8536 round_trippers.go:580]     Audit-Id: cd04bfbe-a408-452d-85b5-c0383bc8bdf9
	I0610 12:31:40.367813    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:40.367813    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:40.367813    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:40.368351    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1756","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0610 12:31:40.861947    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:40.861947    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:40.861947    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:40.861947    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:40.865536    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:40.865536    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:40.865536    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:40.865727    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:40.865727    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:40 GMT
	I0610 12:31:40.865727    8536 round_trippers.go:580]     Audit-Id: 06c81486-729e-4672-bb6e-374279bb4a68
	I0610 12:31:40.865727    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:40.865727    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:40.866092    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:40.866615    8536 node_ready.go:49] node "multinode-813300" has status "Ready":"True"
	I0610 12:31:40.866818    8536 node_ready.go:38] duration metric: took 36.0105676s for node "multinode-813300" to be "Ready" ...
	I0610 12:31:40.866881    8536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
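The 36 s node wait just logged is a readiness poll: one GET of the Node object roughly every 500 ms (visible in the timestamps above) until the Ready condition reports True, which happened once the object reached resourceVersion 1803. A minimal client-go sketch of the same pattern, assuming a default kubeconfig; the interval matches the log, the 6-minute budget is borrowed from the pod wait announced above, and nodeIsReady is an illustrative name rather than minikube's actual node_ready code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the Node's Ready condition is True -- the
    // same condition the `has status "Ready":"False"` lines above are reading.
    func nodeIsReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumes credentials in the default kubeconfig (~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500 ms; give up after 6 minutes.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-813300", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling through transient errors
    			}
    			return nodeIsReady(node), nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }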
	I0610 12:31:40.866934    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods
	I0610 12:31:40.866934    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:40.866934    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:40.866934    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:40.875248    8536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:31:40.875248    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:40.875248    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:40 GMT
	I0610 12:31:40.875248    8536 round_trippers.go:580]     Audit-Id: b1bedf74-b1ba-4268-a91b-3a1af5810495
	I0610 12:31:40.875248    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:40.875248    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:40.875248    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:40.875248    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:40.878632    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1803"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87076 chars]
	I0610 12:31:40.882999    8536 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:31:40.883208    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:40.883235    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:40.883235    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:40.883235    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:40.886949    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:40.887025    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:40.887025    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:40.887025    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:40.887025    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:40.887025    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:40 GMT
	I0610 12:31:40.887025    8536 round_trippers.go:580]     Audit-Id: cebd202a-d1d2-436f-99a6-9a39287f2ada
	I0610 12:31:40.887025    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:40.887025    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:40.887896    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:40.887956    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:40.887956    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:40.887956    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:40.890666    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:40.890666    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:40.890666    8536 round_trippers.go:580]     Audit-Id: 71a06392-2b67-437d-905f-c7f4eca8b615
	I0610 12:31:40.890666    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:40.890666    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:40.890666    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:40.890666    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:40.890666    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:40 GMT
	I0610 12:31:40.891651    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:41.393965    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:41.393965    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:41.393965    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:41.393965    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:41.397551    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:41.398038    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:41.398038    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:41.398038    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:41.398038    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:41.398038    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:41 GMT
	I0610 12:31:41.398038    8536 round_trippers.go:580]     Audit-Id: 20167246-01ae-4ed9-b9c4-4ec95a01520a
	I0610 12:31:41.398038    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:41.398434    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:41.399353    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:41.399353    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:41.399353    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:41.399353    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:41.402153    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:41.402153    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:41.402153    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:41.402153    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:41 GMT
	I0610 12:31:41.402153    8536 round_trippers.go:580]     Audit-Id: 9e11eaa6-4f17-4e96-a9cf-f3622de977a1
	I0610 12:31:41.402153    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:41.402153    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:41.402153    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:41.402901    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:41.897170    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:41.897170    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:41.897170    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:41.897170    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:41.901739    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:41.901739    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:41.901739    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:41.901739    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:41.901739    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:41 GMT
	I0610 12:31:41.901739    8536 round_trippers.go:580]     Audit-Id: a23b3e22-69d4-41a6-9b06-261613d1f649
	I0610 12:31:41.901739    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:41.901739    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:41.902169    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:41.903133    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:41.903192    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:41.903192    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:41.903192    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:41.906015    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:41.906015    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:41.906314    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:41.906314    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:41.906314    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:41.906314    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:41.906314    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:41 GMT
	I0610 12:31:41.906314    8536 round_trippers.go:580]     Audit-Id: 8de8b9a1-307d-42b7-884b-da33c34ab51b
	I0610 12:31:41.907189    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:42.395474    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:42.395546    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:42.395546    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:42.395546    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:42.399757    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:42.399757    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:42.399757    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:42.399757    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:42 GMT
	I0610 12:31:42.399757    8536 round_trippers.go:580]     Audit-Id: c5185f0a-e8e3-4b12-9005-aa3cd8edd003
	I0610 12:31:42.399860    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:42.399860    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:42.399860    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:42.400186    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:42.400309    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:42.400309    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:42.400309    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:42.400309    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:42.407948    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:31:42.408282    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:42.408282    8536 round_trippers.go:580]     Audit-Id: 77444e99-e37a-4004-9f28-64e7914d61d4
	I0610 12:31:42.408282    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:42.408282    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:42.408282    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:42.408282    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:42.408382    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:42 GMT
	I0610 12:31:42.408495    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:42.893615    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:42.893615    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:42.893615    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:42.893615    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:42.898185    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:42.898185    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:42.898292    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:42.898292    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:42.898292    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:42 GMT
	I0610 12:31:42.898292    8536 round_trippers.go:580]     Audit-Id: 47cc9d81-bb02-41a4-919e-770d96c4e6c6
	I0610 12:31:42.898292    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:42.898292    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:42.898500    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:42.898956    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:42.898956    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:42.898956    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:42.898956    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:42.904847    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:42.904847    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:42.904847    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:42.904847    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:42.904847    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:42 GMT
	I0610 12:31:42.904847    8536 round_trippers.go:580]     Audit-Id: 52aeba0c-2f4b-40e0-ab4f-04dcab962a1d
	I0610 12:31:42.904847    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:42.904847    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:42.905419    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:42.905636    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
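The pod_ready check that just reported "Ready":"False" for coredns-7db6d8ff4d-kbhvv is driven by the PodReady condition in the Pod status bodies being re-fetched above. A fragment sketching that decision; podIsReady is an illustrative name, not minikube's helper:

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // podIsReady reports whether the PodReady condition is True. Until the
    // coredns container passes its readiness probe this stays false, so the
    // loop above keeps re-fetching the Pod every ~500 ms.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

An equivalent one-shot wait from the command line would be `kubectl wait --for=condition=Ready pod/coredns-7db6d8ff4d-kbhvv -n kube-system --timeout=6m`.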
	I0610 12:31:43.384860    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:43.384860    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:43.384860    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:43.384860    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:43.389324    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:43.389324    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:43.390007    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:43.390007    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:43.390007    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:43 GMT
	I0610 12:31:43.390007    8536 round_trippers.go:580]     Audit-Id: f7c02a1c-b43c-4518-ab58-479c7a00e238
	I0610 12:31:43.390007    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:43.390007    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:43.390257    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:43.391354    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:43.391449    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:43.391449    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:43.391543    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:43.394300    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:43.394300    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:43.394300    8536 round_trippers.go:580]     Audit-Id: 3d5fb550-6b08-4aba-bffa-b2606ba0cca1
	I0610 12:31:43.394300    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:43.394300    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:43.394300    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:43.394300    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:43.394300    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:43 GMT
	I0610 12:31:43.394954    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:43.887158    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:43.887158    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:43.887158    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:43.887158    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:43.891745    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:43.891745    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:43.891745    8536 round_trippers.go:580]     Audit-Id: 26bc5778-7a13-471e-9eb8-4268deafe95b
	I0610 12:31:43.891745    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:43.892426    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:43.892426    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:43.892426    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:43.892426    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:43 GMT
	I0610 12:31:43.892787    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:43.893428    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:43.893428    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:43.893428    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:43.893428    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:43.897008    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:43.897008    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:43.897008    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:43.897008    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:43.897008    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:43 GMT
	I0610 12:31:43.897008    8536 round_trippers.go:580]     Audit-Id: ff5b4449-7f45-4706-a063-dfb41bfed254
	I0610 12:31:43.897008    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:43.897008    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:43.897008    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:44.388019    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:44.388019    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:44.388019    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:44.388019    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:44.394092    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:44.394092    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:44.394092    8536 round_trippers.go:580]     Audit-Id: 887b662d-0b46-421d-a490-942070f93ce2
	I0610 12:31:44.394092    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:44.394092    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:44.394092    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:44.394092    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:44.394092    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:44 GMT
	I0610 12:31:44.394092    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:44.394092    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:44.394092    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:44.394092    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:44.394092    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:44.400683    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:44.400683    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:44.400683    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:44.400683    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:44.400683    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:44.400683    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:44 GMT
	I0610 12:31:44.400683    8536 round_trippers.go:580]     Audit-Id: 158d71bf-c09b-4b2a-96c9-085357dbef27
	I0610 12:31:44.400683    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:44.401461    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:44.893190    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:44.893413    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:44.893413    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:44.893413    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:44.897627    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:44.898298    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:44.898298    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:44.898298    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:44.898395    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:44.898395    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:44 GMT
	I0610 12:31:44.898395    8536 round_trippers.go:580]     Audit-Id: 70a7f404-fc27-4843-bc6d-fb64e68b1325
	I0610 12:31:44.898395    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:44.898655    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:44.899724    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:44.899724    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:44.899724    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:44.899724    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:44.903306    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:44.903306    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:44.903306    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:44.903306    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:44 GMT
	I0610 12:31:44.903306    8536 round_trippers.go:580]     Audit-Id: 0c43e73e-df07-42d1-af92-3118f88b0e14
	I0610 12:31:44.903306    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:44.903306    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:44.903306    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:44.903849    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:45.390656    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:45.390867    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:45.390867    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:45.390867    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:45.394685    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:45.394742    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:45.394742    8536 round_trippers.go:580]     Audit-Id: 36b1e8b7-70e0-4424-a52a-def4c4868689
	I0610 12:31:45.394742    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:45.394742    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:45.394742    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:45.394742    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:45.394742    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:45 GMT
	I0610 12:31:45.395720    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:45.396312    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:45.396312    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:45.396312    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:45.396312    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:45.400685    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:45.400685    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:45.400685    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:45.400685    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:45.400685    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:45.400685    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:45 GMT
	I0610 12:31:45.400685    8536 round_trippers.go:580]     Audit-Id: 4f0bd21f-02cb-48a8-a6eb-a791e8a0cd6d
	I0610 12:31:45.400685    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:45.401166    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:45.401360    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
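
Every response in this log carries X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers; these are set by the apiserver's API Priority and Fairness layer and identify the FlowSchema and PriorityLevel that admitted each request. The per-request dump itself comes from client-go's debug round tripper; a minimal equivalent wrapper (illustrative, not the round_trippers.go implementation) looks like this:

// Debug round tripper sketch in the spirit of the output above.
// The header list and log format are assumptions for illustration.
package main

import (
	"log"
	"net/http"
	"time"
)

type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("%s %s -> %s in %d ms",
		req.Method, req.URL, resp.Status, time.Since(start).Milliseconds())
	// API Priority and Fairness attribution for this request.
	for _, h := range []string{
		"X-Kubernetes-Pf-Flowschema-Uid",
		"X-Kubernetes-Pf-Prioritylevel-Uid",
	} {
		log.Printf("    %s: %s", h, resp.Header.Get(h))
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRT{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.invalid/healthz") // placeholder URL
	if err == nil {
		resp.Body.Close()
	}
}
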
	I0610 12:31:45.890618    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:45.890618    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:45.890618    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:45.890618    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:45.894200    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:45.894200    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:45.894200    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:45.894709    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:45.894709    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:45 GMT
	I0610 12:31:45.894709    8536 round_trippers.go:580]     Audit-Id: e347400d-206c-469f-a710-8f9c73b3329d
	I0610 12:31:45.894709    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:45.894709    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:45.894873    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:45.895757    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:45.895812    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:45.895812    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:45.895812    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:45.910510    8536 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0610 12:31:45.910510    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:45.910688    8536 round_trippers.go:580]     Audit-Id: f2bed27c-1c92-4323-8df6-3daf6c1c93a1
	I0610 12:31:45.910688    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:45.910688    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:45.910688    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:45.910688    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:45.910688    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:45 GMT
	I0610 12:31:45.911511    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:46.388409    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:46.388409    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:46.388409    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:46.388409    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:46.391453    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:46.391453    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:46.391453    8536 round_trippers.go:580]     Audit-Id: 74a99990-14af-4987-97ba-10c0f95b22b4
	I0610 12:31:46.391453    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:46.391453    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:46.391453    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:46.391453    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:46.392476    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:46 GMT
	I0610 12:31:46.392738    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:46.393349    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:46.393349    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:46.393349    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:46.393349    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:46.396686    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:46.396949    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:46.396949    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:46.396949    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:46.396949    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:46 GMT
	I0610 12:31:46.397050    8536 round_trippers.go:580]     Audit-Id: 477c0a89-a160-4064-bb20-c0b778b541c5
	I0610 12:31:46.397050    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:46.397050    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:46.397119    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:46.890760    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:46.890836    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:46.890836    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:46.890836    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:46.894588    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:46.894588    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:46.895452    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:46.895452    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:46 GMT
	I0610 12:31:46.895452    8536 round_trippers.go:580]     Audit-Id: d394f82c-0010-4b63-8a37-19b12886ab57
	I0610 12:31:46.895452    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:46.895452    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:46.895452    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:46.895688    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:46.896530    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:46.896530    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:46.896530    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:46.896530    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:46.898907    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:46.898907    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:46.898907    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:46 GMT
	I0610 12:31:46.898907    8536 round_trippers.go:580]     Audit-Id: 358971b7-94bb-408d-b4c1-0b03694cc3c3
	I0610 12:31:46.898907    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:46.898907    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:46.899482    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:46.899482    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:46.899625    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:47.397642    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:47.397642    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:47.397642    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:47.397642    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:47.402232    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:47.402347    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:47.402417    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:47.402417    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:47.402417    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:47.402417    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:47.402417    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:47 GMT
	I0610 12:31:47.402417    8536 round_trippers.go:580]     Audit-Id: 8667f2ff-135a-4bb0-a6a2-c1bfff9825ee
	I0610 12:31:47.403339    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:47.404288    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:47.404288    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:47.404288    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:47.404288    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:47.407761    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:47.407761    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:47.407761    8536 round_trippers.go:580]     Audit-Id: 1a3c4bbc-1cef-4f24-9267-eba40737a3b8
	I0610 12:31:47.407761    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:47.407761    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:47.407844    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:47.407844    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:47.407844    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:47 GMT
	I0610 12:31:47.407950    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:47.408630    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
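
The "[truncated N chars]" suffix on the Response Body lines is client-go's own body logging at work: request.go prints payloads only at high klog verbosity and clips anything over a fixed cap, appending the number of characters dropped. The clipping is roughly the following (the cap value is an assumption for illustration, not a quoted client-go constant):

// Body-clipping sketch behind the "[truncated N chars]" markers.
package main

import "fmt"

// truncateBody keeps at most max bytes and records how much was cut.
func truncateBody(body string, max int) string {
	if len(body) <= max {
		return body
	}
	return fmt.Sprintf("%s [truncated %d chars]", body[:max], len(body)-max)
}

func main() {
	fmt.Println(truncateBody("0123456789", 4)) // 0123 [truncated 6 chars]
}
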
	I0610 12:31:47.888812    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:47.888812    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:47.888812    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:47.888812    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:47.893703    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:47.893774    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:47.893774    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:47.893774    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:47 GMT
	I0610 12:31:47.893774    8536 round_trippers.go:580]     Audit-Id: 938807f2-7b80-4b77-92d9-89082d06391c
	I0610 12:31:47.893774    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:47.893845    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:47.893845    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:47.893907    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:47.893907    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:47.893907    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:47.893907    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:47.893907    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:47.903722    8536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 12:31:47.903722    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:47.903722    8536 round_trippers.go:580]     Audit-Id: 9ba33a89-6eb7-4fe2-8e6a-a148dd324aaa
	I0610 12:31:47.903722    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:47.903722    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:47.903722    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:47.903722    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:47.903722    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:47 GMT
	I0610 12:31:47.904513    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:48.388872    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:48.388925    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:48.388966    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:48.388966    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:48.392340    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:48.393467    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:48.393467    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:48.393467    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:48.393467    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:48.393467    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:48.393467    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:48 GMT
	I0610 12:31:48.393467    8536 round_trippers.go:580]     Audit-Id: 353e44e6-6cee-41a8-9e15-b23addccfa7a
	I0610 12:31:48.393744    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:48.394529    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:48.394585    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:48.394585    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:48.394585    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:48.397058    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:48.397058    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:48.397058    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:48.397058    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:48 GMT
	I0610 12:31:48.397058    8536 round_trippers.go:580]     Audit-Id: 392c6711-5b3e-460e-bba0-fdbb6c09b0bf
	I0610 12:31:48.397058    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:48.397058    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:48.397462    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:48.397844    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:48.888609    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:48.888692    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:48.888692    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:48.888692    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:48.893206    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:48.893206    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:48.893206    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:48.893206    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:48.893206    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:48 GMT
	I0610 12:31:48.893547    8536 round_trippers.go:580]     Audit-Id: cb7eaeaf-54cd-43c5-beae-4890772513a3
	I0610 12:31:48.893547    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:48.893547    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:48.893766    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:48.894529    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:48.894529    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:48.894529    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:48.894529    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:48.897453    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:48.897453    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:48.897453    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:48.897453    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:48.897453    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:48.897453    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:48.897453    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:48 GMT
	I0610 12:31:48.897453    8536 round_trippers.go:580]     Audit-Id: 0f401687-d102-45be-bb7a-f1072cc0df72
	I0610 12:31:48.897453    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:49.390115    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:49.390115    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:49.390115    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:49.390115    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:49.393737    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:49.394246    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:49.394246    8536 round_trippers.go:580]     Audit-Id: 467c97ce-fc51-4bd7-9830-ce68ddab6306
	I0610 12:31:49.394246    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:49.394357    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:49.394357    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:49.394357    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:49.394357    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:49 GMT
	I0610 12:31:49.394357    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:49.395546    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:49.395628    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:49.395628    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:49.395628    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:49.398958    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:49.398958    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:49.398958    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:49 GMT
	I0610 12:31:49.398958    8536 round_trippers.go:580]     Audit-Id: d70aee54-6990-4db7-9322-dc924be95bbd
	I0610 12:31:49.398958    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:49.398958    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:49.398958    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:49.398958    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:49.399473    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:49.888695    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:49.888753    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:49.888824    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:49.888824    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:49.892114    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:49.892114    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:49.892114    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:49.892114    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:49.892114    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:49 GMT
	I0610 12:31:49.893074    8536 round_trippers.go:580]     Audit-Id: 27371142-dca1-4859-8980-e9439ec69651
	I0610 12:31:49.893074    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:49.893074    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:49.893264    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:49.894094    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:49.894094    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:49.894094    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:49.894094    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:49.896907    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:49.896907    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:49.896907    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:49 GMT
	I0610 12:31:49.896907    8536 round_trippers.go:580]     Audit-Id: 5132f8bc-6215-4b5f-8bdc-593b27e47cd8
	I0610 12:31:49.896907    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:49.896907    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:49.896907    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:49.896907    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:49.897627    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:49.897897    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
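	(The repeating GET pairs above are one readiness poll cycle: roughly every 500 ms the client fetches the coredns pod and then its node, logs each exchange at round_trippers verbosity, and re-evaluates the pod's Ready condition, which pod_ready.go:102 keeps reporting as "False". A minimal client-go sketch of that wait pattern follows — illustrative only; the helper name podIsReady, the default kubeconfig path, and the 6-minute cap are assumptions, not minikube's actual code.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True -- the same
	// check the log keeps printing as has status "Ready":"False".
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: default kubeconfig; the test binary builds its client differently.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 500 ms matches the request spacing in the log; the 6-minute cap is arbitrary.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-kbhvv", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as "not ready yet" and keep polling
				}
				return podIsReady(pod), nil
			})
		fmt.Println("pod ready:", err == nil)
	}

	(PollUntilContextTimeout retries the condition on a fixed interval, which is what produces the ~500 ms cadence of the GETs recorded above.)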
	I0610 12:31:50.388431    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:50.388525    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:50.388525    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:50.388525    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:50.392035    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:50.392035    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:50.392110    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:50.392110    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:50 GMT
	I0610 12:31:50.392110    8536 round_trippers.go:580]     Audit-Id: bccfa588-230e-473e-a59a-eb9f796f86d9
	I0610 12:31:50.392110    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:50.392110    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:50.392110    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:50.392206    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:50.393251    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:50.393434    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:50.393434    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:50.393434    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:50.395743    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:50.396713    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:50.396713    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:50.396713    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:50.396713    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:50.396713    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:50 GMT
	I0610 12:31:50.396713    8536 round_trippers.go:580]     Audit-Id: dcf61100-4ec6-4dcd-a38c-5094f998079e
	I0610 12:31:50.396713    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:50.396987    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:50.890093    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:50.890093    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:50.890093    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:50.890093    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:50.893633    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:50.893633    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:50.893710    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:50.893710    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:50.893710    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:50.893710    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:50 GMT
	I0610 12:31:50.893710    8536 round_trippers.go:580]     Audit-Id: 30c026f2-12d3-498a-a0cc-70a25575e1ff
	I0610 12:31:50.893801    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:50.894039    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:50.895093    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:50.895122    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:50.895122    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:50.895122    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:50.899131    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:50.899131    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:50.899131    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:50.899131    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:50.899131    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:50.899131    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:50.899342    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:50 GMT
	I0610 12:31:50.899342    8536 round_trippers.go:580]     Audit-Id: 44f40e08-43ab-4aaa-80b4-c0c42902607f
	I0610 12:31:50.899600    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:51.394860    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:51.394860    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:51.394860    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:51.394860    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:51.404675    8536 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 12:31:51.404675    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:51.404675    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:51.404675    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:51.404675    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:51.404675    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:51.405365    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:51 GMT
	I0610 12:31:51.405365    8536 round_trippers.go:580]     Audit-Id: 7c0e2efe-3c74-4922-b1e5-0d447d5a77bd
	I0610 12:31:51.405685    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:51.406531    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:51.406743    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:51.406743    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:51.406743    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:51.410731    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:51.410779    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:51.410779    8536 round_trippers.go:580]     Audit-Id: 3831c8c1-fe1b-4d77-a44f-71f5f9ac2bfa
	I0610 12:31:51.410779    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:51.410779    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:51.410779    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:51.410779    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:51.410779    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:51 GMT
	I0610 12:31:51.412896    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:51.895996    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:51.895996    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:51.895996    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:51.895996    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:51.899561    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:51.899561    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:51.899561    8536 round_trippers.go:580]     Audit-Id: 75aefa48-8374-491c-ba71-5de238d340bd
	I0610 12:31:51.900602    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:51.900624    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:51.900624    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:51.900624    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:51.900624    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:51 GMT
	I0610 12:31:51.900967    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:51.901814    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:51.901814    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:51.901902    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:51.901902    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:51.905255    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:51.905255    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:51.905255    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:51.905255    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:51.905255    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:51.905255    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:51 GMT
	I0610 12:31:51.905255    8536 round_trippers.go:580]     Audit-Id: 211c7ae1-94a7-4c8a-b7a1-ee8099f8c3aa
	I0610 12:31:51.905255    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:51.905255    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:51.906197    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
	I0610 12:31:52.395067    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:52.395188    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:52.395188    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:52.395188    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:52.403510    8536 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 12:31:52.403510    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:52.403510    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:52 GMT
	I0610 12:31:52.403510    8536 round_trippers.go:580]     Audit-Id: 1df576ea-88e2-4612-9902-f5d0c5db1989
	I0610 12:31:52.403510    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:52.403510    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:52.403510    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:52.403510    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:52.404487    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:52.405323    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:52.405482    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:52.405482    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:52.405482    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:52.412442    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:52.412442    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:52.412442    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:52.412442    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:52 GMT
	I0610 12:31:52.412442    8536 round_trippers.go:580]     Audit-Id: 689662bb-6de8-43c1-8301-f7d3d6334113
	I0610 12:31:52.412442    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:52.412442    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:52.412442    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:52.412442    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:52.894259    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:52.894259    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:52.894259    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:52.894259    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:52.899288    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:52.899288    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:52.899288    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:52.899288    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:52 GMT
	I0610 12:31:52.899288    8536 round_trippers.go:580]     Audit-Id: d5da442c-d5f1-4774-a843-1cfa9a480a59
	I0610 12:31:52.899288    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:52.899288    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:52.899288    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:52.899288    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:52.900422    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:52.900422    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:52.900422    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:52.900422    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:52.902865    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:52.902865    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:52.902865    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:52.902865    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:52 GMT
	I0610 12:31:52.902865    8536 round_trippers.go:580]     Audit-Id: dd53dc69-83b0-449e-9712-b32d87352a80
	I0610 12:31:52.902865    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:52.902865    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:52.902865    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:52.903878    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:53.397709    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:53.398003    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:53.398003    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:53.398003    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:53.402830    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:53.402912    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:53.402912    8536 round_trippers.go:580]     Audit-Id: 648cf4be-8c26-4760-a879-a73c68fee464
	I0610 12:31:53.402912    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:53.402912    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:53.402912    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:53.403000    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:53.403000    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:53 GMT
	I0610 12:31:53.403064    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:53.403907    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:53.403907    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:53.403907    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:53.403907    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:53.407416    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:53.407600    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:53.407600    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:53.407600    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:53.407600    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:53.407725    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:53.407725    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:53 GMT
	I0610 12:31:53.407725    8536 round_trippers.go:580]     Audit-Id: af476a44-29a2-421c-a9f6-a79e047a9919
	I0610 12:31:53.408116    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:53.898819    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:53.898893    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:53.898893    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:53.898893    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:53.902740    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:53.903276    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:53.903276    8536 round_trippers.go:580]     Audit-Id: 9d37fdff-b95b-416c-a615-e50616f5bbbf
	I0610 12:31:53.903276    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:53.903276    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:53.903276    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:53.903276    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:53.903276    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:53 GMT
	I0610 12:31:53.903657    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:53.904571    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:53.904642    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:53.904642    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:53.904642    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:53.911066    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:53.911066    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:53.911066    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:53.911066    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:53.911066    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:53 GMT
	I0610 12:31:53.911066    8536 round_trippers.go:580]     Audit-Id: aa07ce39-2b0a-4c55-9e6d-999f5cd3c569
	I0610 12:31:53.911066    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:53.911066    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:53.911066    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:53.911066    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
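	(Each Response Body above is elided by the request logger — "[truncated 5239 chars]" / "[truncated 6843 chars]" — so the status conditions being tested never appear in full. The short sketch below, reusing the imports and client from the previous one, prints the pod and node conditions the loop keeps re-reading; dumpConditions is our name and the pod/node names are copied from the log, so this is illustrative, not part of minikube.)

	// dumpConditions prints the conditions the poller keeps re-reading, free of
	// the request logger's truncation.
	func dumpConditions(ctx context.Context, client kubernetes.Interface) error {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-kbhvv", metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			fmt.Printf("pod  %-20v %-6v %s\n", c.Type, c.Status, c.Reason)
		}
		node, err := client.CoreV1().Nodes().Get(ctx, "multinode-813300", metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			fmt.Printf("node %-20v %-6v %s\n", c.Type, c.Status, c.Reason)
		}
		return nil
	}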
	I0610 12:31:54.384189    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:54.384189    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:54.384189    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:54.384189    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:54.386897    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:54.386897    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:54.387890    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:54.387890    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:54.387890    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:54.387928    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:54.387928    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:54 GMT
	I0610 12:31:54.387928    8536 round_trippers.go:580]     Audit-Id: 74864d97-b595-4719-ad06-d69234f6cc38
	I0610 12:31:54.388307    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:54.388785    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:54.389321    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:54.389374    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:54.389374    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:54.392406    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:54.392489    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:54.392489    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:54.392489    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:54.392489    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:54.392547    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:54 GMT
	I0610 12:31:54.392547    8536 round_trippers.go:580]     Audit-Id: a954f982-1ca9-4c7d-9439-f0642919de98
	I0610 12:31:54.392547    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:54.393013    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:54.889555    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:54.889555    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:54.889555    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:54.889555    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:54.893119    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:54.893119    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:54.893119    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:54 GMT
	I0610 12:31:54.894097    8536 round_trippers.go:580]     Audit-Id: 331dc0da-d502-4bc8-abc1-99c921623748
	I0610 12:31:54.894097    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:54.894097    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:54.894097    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:54.894199    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:54.894275    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:54.894275    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:54.894275    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:54.894275    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:54.894275    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:54.899183    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:54.899183    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:54.899183    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:54 GMT
	I0610 12:31:54.899183    8536 round_trippers.go:580]     Audit-Id: 2a916e05-56a4-4d92-bbd6-e045002cf12d
	I0610 12:31:54.899183    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:54.899183    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:54.899183    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:54.899183    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:54.899840    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:55.387314    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:55.387314    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:55.387314    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:55.387314    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:55.391061    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:55.391061    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:55.391061    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:55.391061    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:55.391061    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:55 GMT
	I0610 12:31:55.391061    8536 round_trippers.go:580]     Audit-Id: cceb467b-b730-4a00-b13d-702ac7274d72
	I0610 12:31:55.391061    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:55.391061    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:55.391487    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:55.392294    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:55.392377    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:55.392404    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:55.392404    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:55.394408    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:55.394408    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:55.394408    8536 round_trippers.go:580]     Audit-Id: ab2052fb-a56e-4a09-94ca-ade21d8ff858
	I0610 12:31:55.394408    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:55.394408    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:55.394408    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:55.395319    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:55.395319    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:55 GMT
	I0610 12:31:55.395810    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:55.886020    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:55.886093    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:55.886093    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:55.886154    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:55.890321    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:55.890321    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:55.890321    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:55 GMT
	I0610 12:31:55.890321    8536 round_trippers.go:580]     Audit-Id: def65afa-0182-4015-b250-de270ddcbb81
	I0610 12:31:55.890393    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:55.890393    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:55.890393    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:55.890393    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:55.890593    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:55.891423    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:55.891506    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:55.891506    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:55.891580    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:55.897232    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:55.897232    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:55.897232    8536 round_trippers.go:580]     Audit-Id: 01ff2a75-fb24-4cf4-9dd0-d1e8ec935dae
	I0610 12:31:55.897232    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:55.897232    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:55.897232    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:55.897232    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:55.897232    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:55 GMT
	I0610 12:31:55.897887    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:56.386215    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:56.386215    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:56.386215    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:56.386215    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:56.389827    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:56.389827    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:56.389827    8536 round_trippers.go:580]     Audit-Id: 77540a06-2c66-48fb-83c1-05169ef67daa
	I0610 12:31:56.389827    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:56.389827    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:56.389827    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:56.389827    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:56.389827    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:56 GMT
	I0610 12:31:56.390372    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:56.391198    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:56.391198    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:56.391198    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:56.391198    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:56.394518    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:56.394518    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:56.394518    8536 round_trippers.go:580]     Audit-Id: 6784f723-a5b3-4dfd-83c5-6adfa15cacd4
	I0610 12:31:56.394518    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:56.394518    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:56.394518    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:56.394518    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:56.394518    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:56 GMT
	I0610 12:31:56.394903    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:56.395162    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
	I0610 12:31:56.884377    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:56.884377    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:56.884377    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:56.884377    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:56.888022    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:56.888022    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:56.888022    8536 round_trippers.go:580]     Audit-Id: 0c236ae1-d05e-4acd-b0a0-54c467334ef1
	I0610 12:31:56.888022    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:56.888022    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:56.888022    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:56.888918    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:56.888918    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:56 GMT
	I0610 12:31:56.889117    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:56.889924    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:56.890057    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:56.890057    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:56.890057    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:56.893564    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:56.893616    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:56.893616    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:56.893616    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:56.893616    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:56.893616    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:56.893616    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:56 GMT
	I0610 12:31:56.893616    8536 round_trippers.go:580]     Audit-Id: 9a766eb2-2e9f-44db-b4d2-96c65db5c0aa
	I0610 12:31:56.893616    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:57.384342    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:57.384430    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:57.384430    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:57.384532    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:57.389609    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:31:57.389609    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:57.389609    8536 round_trippers.go:580]     Audit-Id: c1c72663-29df-492e-b912-5481e6d7c9d4
	I0610 12:31:57.389609    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:57.389609    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:57.389609    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:57.389609    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:57.389609    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:57 GMT
	I0610 12:31:57.390233    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:57.390999    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:57.391065    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:57.391065    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:57.391065    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:57.395075    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:57.395983    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:57.395983    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:57.395983    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:57 GMT
	I0610 12:31:57.395983    8536 round_trippers.go:580]     Audit-Id: 8dd3578f-0c5f-4ca7-b52e-5c48f689533c
	I0610 12:31:57.395983    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:57.395983    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:57.395983    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:57.395983    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:57.887666    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:57.887666    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:57.887666    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:57.887666    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:57.891243    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:57.891243    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:57.891243    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:57.891243    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:57.891243    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:57.892203    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:57 GMT
	I0610 12:31:57.892203    8536 round_trippers.go:580]     Audit-Id: 0c7f820f-acf5-46f2-8184-90b5914fca2f
	I0610 12:31:57.892203    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:57.892412    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:57.893150    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:57.893211    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:57.893211    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:57.893211    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:57.896460    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:57.896460    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:57.896460    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:57.897153    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:57 GMT
	I0610 12:31:57.897153    8536 round_trippers.go:580]     Audit-Id: 3c48ed7d-2cca-4e91-9b50-2a3004eceb65
	I0610 12:31:57.897153    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:57.897153    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:57.897153    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:57.897219    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:58.389037    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:58.389037    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:58.389132    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:58.389132    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:58.392489    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:58.392489    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:58.392489    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:58.392489    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:58.392489    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:58 GMT
	I0610 12:31:58.392489    8536 round_trippers.go:580]     Audit-Id: d82fd350-72f6-4dd0-b873-2a60338d878f
	I0610 12:31:58.392489    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:58.392489    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:58.393796    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:58.394552    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:58.394552    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:58.394552    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:58.394552    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:58.401373    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:31:58.401896    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:58.401896    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:58 GMT
	I0610 12:31:58.401896    8536 round_trippers.go:580]     Audit-Id: 4e723312-8171-49eb-976e-299d1b32353f
	I0610 12:31:58.401896    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:58.401896    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:58.401896    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:58.401896    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:58.402805    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:58.402805    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
	I0610 12:31:58.888868    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:58.888868    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:58.888868    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:58.888868    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:58.894053    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:31:58.894125    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:58.894125    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:58.894125    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:58.894125    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:58.894125    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:58.894125    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:58 GMT
	I0610 12:31:58.894125    8536 round_trippers.go:580]     Audit-Id: cb3c92c1-7b69-4780-a616-000f6f9686b7
	I0610 12:31:58.894125    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:58.895216    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:58.895315    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:58.895315    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:58.895315    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:58.898143    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:58.898143    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:58.898143    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:58.898143    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:58.898143    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:58 GMT
	I0610 12:31:58.898143    8536 round_trippers.go:580]     Audit-Id: 3b62be86-b36c-4b59-938d-145866100929
	I0610 12:31:58.898143    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:58.898143    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:58.898655    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:59.389230    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:59.389558    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:59.389558    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:59.389558    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:59.393456    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:59.393456    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:59.393456    8536 round_trippers.go:580]     Audit-Id: aab38031-b235-4216-8532-a936860f3f8e
	I0610 12:31:59.393456    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:59.393456    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:59.393456    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:59.393456    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:59.393456    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:59 GMT
	I0610 12:31:59.393863    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:59.394675    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:59.394745    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:59.394745    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:59.394745    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:59.397091    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:59.397091    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:59.397091    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:59.397919    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:59.397919    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:59.397919    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:59 GMT
	I0610 12:31:59.397919    8536 round_trippers.go:580]     Audit-Id: b39d8db6-cfab-497e-8cc5-94ee57de9047
	I0610 12:31:59.397919    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:59.398385    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:31:59.886748    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:31:59.886811    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:59.886905    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:59.886905    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:59.890344    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:31:59.891102    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:59.891102    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:59.891102    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:59 GMT
	I0610 12:31:59.891102    8536 round_trippers.go:580]     Audit-Id: d2a32bb1-a3ff-4306-aac9-3487138956d7
	I0610 12:31:59.891102    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:59.891102    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:59.891102    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:59.891427    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:31:59.892165    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:31:59.892165    8536 round_trippers.go:469] Request Headers:
	I0610 12:31:59.892165    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:31:59.892165    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:31:59.894722    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:31:59.894722    8536 round_trippers.go:577] Response Headers:
	I0610 12:31:59.894722    8536 round_trippers.go:580]     Audit-Id: 02518c0b-5c4e-410e-9b6b-bcc7be846a89
	I0610 12:31:59.894722    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:31:59.895209    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:31:59.895209    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:31:59.895209    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:31:59.895209    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:31:59 GMT
	I0610 12:31:59.895338    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:00.387007    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:00.387007    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:00.387007    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:00.387007    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:00.391259    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:00.391259    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:00.391698    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:00.391698    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:00.391698    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:00.391698    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:00.391698    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:00 GMT
	I0610 12:32:00.391698    8536 round_trippers.go:580]     Audit-Id: 09ad5030-5939-4924-a103-8a2424c75246
	I0610 12:32:00.391960    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:00.392886    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:00.392886    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:00.392886    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:00.392886    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:00.396469    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:00.396469    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:00.396617    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:00.396617    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:00.396617    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:00.396617    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:00.396617    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:00 GMT
	I0610 12:32:00.396617    8536 round_trippers.go:580]     Audit-Id: b311b9ff-02f0-4547-88a3-cfa17fb4a565
	I0610 12:32:00.396775    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:00.897899    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:00.898262    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:00.898262    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:00.898262    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:00.902004    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:00.902185    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:00.902185    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:00 GMT
	I0610 12:32:00.902185    8536 round_trippers.go:580]     Audit-Id: 5972290a-208f-4029-9080-37557828a965
	I0610 12:32:00.902185    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:00.902185    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:00.902185    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:00.902185    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:00.903142    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:00.905469    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:00.905469    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:00.905877    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:00.905877    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:00.908613    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:00.908613    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:00.908613    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:00.908613    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:00.908613    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:00.908613    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:00.908613    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:00 GMT
	I0610 12:32:00.908613    8536 round_trippers.go:580]     Audit-Id: 0ff10877-cd2d-4d45-b7b7-3794fc9f8fbb
	I0610 12:32:00.909756    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:00.910226    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
	I0610 12:32:01.398130    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:01.398130    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:01.398130    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:01.398130    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:01.402736    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:01.402736    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:01.402736    8536 round_trippers.go:580]     Audit-Id: 64c95df3-f492-42b4-a5cf-7f8b374e5ad4
	I0610 12:32:01.402736    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:01.402736    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:01.402844    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:01.402844    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:01.402844    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:01 GMT
	I0610 12:32:01.403066    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:01.403739    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:01.403739    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:01.403739    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:01.403739    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:01.406583    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:01.406583    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:01.406583    8536 round_trippers.go:580]     Audit-Id: ecba7fbc-785f-405f-bfe6-ae982452641d
	I0610 12:32:01.406583    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:01.406583    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:01.406583    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:01.407186    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:01.407186    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:01 GMT
	I0610 12:32:01.407676    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:01.898052    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:01.898052    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:01.898182    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:01.898182    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:01.903108    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:01.903108    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:01.903108    8536 round_trippers.go:580]     Audit-Id: 180f4d90-f990-48d8-8eff-ed2063c95c66
	I0610 12:32:01.903798    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:01.903798    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:01.903798    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:01.903798    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:01.903798    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:01 GMT
	I0610 12:32:01.904024    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:01.904845    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:01.904845    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:01.904903    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:01.904903    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:01.907325    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:01.907325    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:01.907325    8536 round_trippers.go:580]     Audit-Id: 7805a95c-9503-417e-98f7-10bde24f6457
	I0610 12:32:01.907325    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:01.907325    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:01.907325    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:01.907325    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:01.907325    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:01 GMT
	I0610 12:32:01.908386    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:02.384478    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:02.384478    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:02.384478    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:02.384478    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:02.386594    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:02.386594    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:02.386594    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:02.386594    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:02.386594    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:02.386594    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:02.386594    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:02 GMT
	I0610 12:32:02.386594    8536 round_trippers.go:580]     Audit-Id: ff808251-d9c0-4cdd-a543-148996fb6689
	I0610 12:32:02.388707    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:02.389552    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:02.389552    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:02.389552    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:02.389552    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:02.394130    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:02.394130    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:02.394130    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:02.394130    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:02.394130    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:02.394130    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:02 GMT
	I0610 12:32:02.394130    8536 round_trippers.go:580]     Audit-Id: 1f1ef3ed-633c-4d6f-8235-c54e80ec57ed
	I0610 12:32:02.394130    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:02.394867    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:02.898471    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:02.898553    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:02.898553    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:02.898553    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:02.901990    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:02.901990    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:02.901990    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:02 GMT
	I0610 12:32:02.901990    8536 round_trippers.go:580]     Audit-Id: 76f9a2eb-70c9-4754-841b-fd01e32a08f2
	I0610 12:32:02.901990    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:02.902857    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:02.902857    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:02.902857    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:02.903216    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:02.904495    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:02.904597    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:02.904597    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:02.904597    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:02.909123    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:02.909409    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:02.909465    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:02.909465    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:02.909465    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:02.909536    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:02.909536    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:02 GMT
	I0610 12:32:02.909599    8536 round_trippers.go:580]     Audit-Id: b8f79694-e699-4c29-8be6-daf0bead409f
	I0610 12:32:02.909981    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:02.910792    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
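The `pod_ready.go:102` line above marks one iteration of the readiness poll visible throughout this log: the pod is re-fetched, its PodReady condition is inspected, and the loop continues while it reports "False". A minimal Go sketch of that condition check, assuming only the standard k8s.io/api types; `podIsReady` is a hypothetical helper for illustration, not minikube's actual pod_ready.go code:

    package readiness

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    // This mirrors what the "has status \"Ready\":\"False\"" lines imply:
    // scan Status.Conditions for the PodReady entry and compare its Status.
    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        // No PodReady condition recorded yet: treat as not ready.
        return false
    }
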
	I0610 12:32:03.397740    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:03.397830    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:03.397830    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:03.397830    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:03.400218    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:03.400218    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:03.400218    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:03.400218    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:03 GMT
	I0610 12:32:03.401382    8536 round_trippers.go:580]     Audit-Id: 020e7716-3e5d-4506-8086-93a851176bb1
	I0610 12:32:03.401404    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:03.401423    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:03.401423    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:03.401476    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:03.402567    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:03.402567    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:03.402567    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:03.402567    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:03.405133    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:03.405133    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:03.405407    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:03.405407    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:03.405407    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:03 GMT
	I0610 12:32:03.405407    8536 round_trippers.go:580]     Audit-Id: 0f513b19-db4d-4167-9a56-ebe05280b01e
	I0610 12:32:03.405407    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:03.405407    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:03.405649    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:03.893670    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:03.893670    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:03.893872    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:03.893872    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:03.897191    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:03.897191    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:03.897191    8536 round_trippers.go:580]     Audit-Id: 08204798-b862-4224-a080-43229ee16660
	I0610 12:32:03.897191    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:03.897795    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:03.897795    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:03.897795    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:03.897795    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:03 GMT
	I0610 12:32:03.898006    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:03.899039    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:03.899185    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:03.899185    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:03.899185    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:03.903329    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:03.903329    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:03.903329    8536 round_trippers.go:580]     Audit-Id: f3d5b8ef-1bd6-4132-bedc-1cfad2a97dc7
	I0610 12:32:03.903329    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:03.903329    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:03.903329    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:03.903329    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:03.903329    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:03 GMT
	I0610 12:32:03.903329    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:04.397369    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:04.397476    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:04.397476    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:04.397476    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:04.400937    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:04.400937    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:04.400937    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:04.400937    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:04.400937    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:04.401074    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:04 GMT
	I0610 12:32:04.401074    8536 round_trippers.go:580]     Audit-Id: 85f87a79-354e-4891-a1be-7f3a9425f60d
	I0610 12:32:04.401074    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:04.401421    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:04.402996    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:04.404655    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:04.404655    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:04.404655    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:04.408992    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:04.408992    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:04.408992    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:04.408992    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:04.408992    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:04.408992    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:04 GMT
	I0610 12:32:04.408992    8536 round_trippers.go:580]     Audit-Id: 43e8a598-1ec4-4a1b-a363-3585b259a79e
	I0610 12:32:04.408992    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:04.410084    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:04.891338    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:04.891460    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:04.891460    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:04.891460    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:04.894807    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:04.894986    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:04.894986    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:04.894986    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:04 GMT
	I0610 12:32:04.894986    8536 round_trippers.go:580]     Audit-Id: 1f313cdc-e0e4-4263-8f9b-bbefee4cd981
	I0610 12:32:04.894986    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:04.894986    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:04.894986    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:04.895276    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:04.896059    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:04.896092    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:04.896092    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:04.896190    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:04.899401    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:04.899401    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:04.899401    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:04 GMT
	I0610 12:32:04.899401    8536 round_trippers.go:580]     Audit-Id: 8505f402-ef3c-451b-8dee-9622097faedb
	I0610 12:32:04.899401    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:04.899401    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:04.899401    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:04.899401    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:04.900269    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:05.392721    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:05.392721    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:05.392721    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:05.392721    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:05.400699    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:32:05.400699    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:05.400699    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:05.400699    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:05 GMT
	I0610 12:32:05.400699    8536 round_trippers.go:580]     Audit-Id: 92a99aef-9da1-4a5a-91bd-24d7c020b40e
	I0610 12:32:05.400699    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:05.400699    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:05.400699    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:05.400699    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:05.401728    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:05.401781    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:05.401781    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:05.401781    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:05.408287    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:32:05.408349    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:05.408349    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:05.408349    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:05.408349    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:05.408349    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:05 GMT
	I0610 12:32:05.408349    8536 round_trippers.go:580]     Audit-Id: a82c4c41-3687-4671-a8c0-ad49faef5770
	I0610 12:32:05.408349    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:05.408639    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:05.409481    8536 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"False"
	I0610 12:32:05.883733    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:05.883801    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:05.883801    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:05.883801    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:05.889843    8536 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 12:32:05.890002    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:05.890002    8536 round_trippers.go:580]     Audit-Id: 1f6902d7-3ab7-4ecd-88e8-2d8210758fbe
	I0610 12:32:05.890002    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:05.890002    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:05.890090    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:05.890090    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:05.890090    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:05 GMT
	I0610 12:32:05.890391    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:05.891308    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:05.891401    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:05.891401    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:05.891401    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:05.894692    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:05.894692    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:05.894692    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:05.894692    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:05 GMT
	I0610 12:32:05.894692    8536 round_trippers.go:580]     Audit-Id: 25245a7e-9886-43af-8201-7119096744a2
	I0610 12:32:05.894692    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:05.894692    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:05.894692    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:05.895556    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.383837    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:06.383915    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.383958    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.383958    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.386745    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:06.386745    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.386745    8536 round_trippers.go:580]     Audit-Id: 9af42174-80c6-4ad3-b20b-e3c7bff12947
	I0610 12:32:06.386745    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.386745    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.386745    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.386745    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.386745    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.387810    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1650","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0610 12:32:06.388373    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.388524    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.388524    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.388524    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.390830    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:06.391669    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.391669    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.391669    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.391669    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.391669    8536 round_trippers.go:580]     Audit-Id: 29b32f68-30cf-44e8-9987-6a7f27022936
	I0610 12:32:06.391669    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.391669    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.392016    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.890846    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbhvv
	I0610 12:32:06.890908    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.890908    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.890908    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.898711    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:32:06.898711    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.898711    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.898711    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.898711    8536 round_trippers.go:580]     Audit-Id: 27435d63-0bdc-4f00-9adb-9527eb6a456c
	I0610 12:32:06.898818    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.898818    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.898818    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.898948    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1827","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0610 12:32:06.899777    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.899777    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.899777    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.899777    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.903363    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:06.904148    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.904148    8536 round_trippers.go:580]     Audit-Id: 429a5b7f-7329-4fbe-8d96-817e9acce578
	I0610 12:32:06.904148    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.904148    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.904148    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.904148    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.904148    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.904425    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.904425    8536 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:06.904425    8536 pod_ready.go:81] duration metric: took 26.0212161s for pod "coredns-7db6d8ff4d-kbhvv" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.904425    8536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
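Once the pod finally reports "Ready":"True", the waiter logs a duration metric (26.0212161s for coredns here) and moves on to the next control-plane pod, each with its own 6m0s budget. A sketch of that wait-and-time pattern, reusing the hypothetical podIsReady helper above; it assumes a recent client-go/apimachinery (wait.PollUntilContextTimeout), and the 500ms interval is inferred from the ~half-second spacing between polls in this log, not taken from minikube's source:

    package readiness

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodReady polls the pod roughly every 500ms for up to 6 minutes
    // and prints an elapsed-time metric once it reports Ready, echoing the
    // "duration metric: took ..." lines above. Hypothetical helper.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                return podIsReady(pod), nil
            })
        if err != nil {
            return fmt.Errorf("pod %q in %q never became Ready: %w", name, ns, err)
        }
        fmt.Printf("duration metric: took %s for pod %q to be Ready\n", time.Since(start), name)
        return nil
    }
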
	I0610 12:32:06.905002    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-813300
	I0610 12:32:06.905045    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.905100    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.905100    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.912112    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:32:06.912112    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.912112    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.912112    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.912112    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.912112    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.912112    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.912112    8536 round_trippers.go:580]     Audit-Id: d2bd7eca-1670-4da0-b9a8-1a6449ada2e5
	I0610 12:32:06.912112    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-813300","namespace":"kube-system","uid":"f9259e5e-61e9-4252-b7c6-de5d499eb9c1","resourceVersion":"1765","creationTimestamp":"2024-06-10T12:31:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.17.150.144:2379","kubernetes.io/config.hash":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.mirror":"76e8893277ba7cea6624561880496e47","kubernetes.io/config.seen":"2024-06-10T12:30:54.120335207Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0610 12:32:06.913144    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.913240    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.913284    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.913284    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.916692    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:06.916692    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.916692    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.916692    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.916692    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.916692    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.916692    8536 round_trippers.go:580]     Audit-Id: a80f8427-8628-48fd-8a2a-4c5fc77cd525
	I0610 12:32:06.916692    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.916692    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.917456    8536 pod_ready.go:92] pod "etcd-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:06.917456    8536 pod_ready.go:81] duration metric: took 13.0311ms for pod "etcd-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.917600    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.917727    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-813300
	I0610 12:32:06.917727    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.917784    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.917784    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.920518    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:06.920518    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.920518    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.920518    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.920894    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.920894    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.920894    8536 round_trippers.go:580]     Audit-Id: 6fec4a4e-9615-4800-818f-262efdda4b7b
	I0610 12:32:06.920894    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.921146    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-813300","namespace":"kube-system","uid":"2cf29b2c-a2a9-46ec-bbc8-fe884e97df06","resourceVersion":"1748","creationTimestamp":"2024-06-10T12:31:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.17.150.144:8443","kubernetes.io/config.hash":"180cf4cc399d604c28cc4df1442ebd5a","kubernetes.io/config.mirror":"180cf4cc399d604c28cc4df1442ebd5a","kubernetes.io/config.seen":"2024-06-10T12:30:54.115839018Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:31:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0610 12:32:06.921651    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.921710    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.921710    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.921710    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.923951    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:06.924645    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.924725    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.924725    8536 round_trippers.go:580]     Audit-Id: 2da9c252-4da7-489b-ab91-7a2644ba3584
	I0610 12:32:06.924725    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.924725    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.924725    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.924725    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.924725    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.925288    8536 pod_ready.go:92] pod "kube-apiserver-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:06.925288    8536 pod_ready.go:81] duration metric: took 7.6881ms for pod "kube-apiserver-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.925288    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.925437    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-813300
	I0610 12:32:06.925437    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.925437    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.925437    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.928073    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:06.928073    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.928750    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.928750    8536 round_trippers.go:580]     Audit-Id: 8a0a0c9a-163a-419d-ab27-d8be40317c05
	I0610 12:32:06.928750    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.928750    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.928750    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.928750    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.929080    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-813300","namespace":"kube-system","uid":"879be9d7-8b2b-4f58-ba70-61d4e9f3441e","resourceVersion":"1767","creationTimestamp":"2024-06-10T12:08:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.mirror":"37865ce1914dc04a4a0a25e98b80ce35","kubernetes.io/config.seen":"2024-06-10T12:08:00.781970961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0610 12:32:06.929699    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.929764    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.929764    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.929764    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.933370    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:06.933370    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.933370    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.933370    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.933370    8536 round_trippers.go:580]     Audit-Id: 30e99097-8527-4d36-b4bf-efe6d6f664e6
	I0610 12:32:06.933370    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.933370    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.933370    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.933370    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.933370    8536 pod_ready.go:92] pod "kube-controller-manager-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:06.934364    8536 pod_ready.go:81] duration metric: took 9.0756ms for pod "kube-controller-manager-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.934364    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.934364    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrpvt
	I0610 12:32:06.934364    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.934364    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.934364    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.937391    8536 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 12:32:06.937855    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.937855    8536 round_trippers.go:580]     Audit-Id: aca57c61-3cfa-4d38-bdb3-b0a1f58731dd
	I0610 12:32:06.937855    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.937855    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.937855    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.937855    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.937855    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.938229    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrpvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1","resourceVersion":"1665","creationTimestamp":"2024-06-10T12:08:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0610 12:32:06.938864    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:06.938923    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:06.938923    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:06.938923    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:06.940667    8536 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 12:32:06.941536    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:06.941536    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:06 GMT
	I0610 12:32:06.941601    8536 round_trippers.go:580]     Audit-Id: 6c7d4bf2-4348-46c0-83a3-349388174104
	I0610 12:32:06.941601    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:06.941601    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:06.941601    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:06.941601    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:06.941601    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:06.941601    8536 pod_ready.go:92] pod "kube-proxy-nrpvt" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:06.942182    8536 pod_ready.go:81] duration metric: took 7.8183ms for pod "kube-proxy-nrpvt" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:06.942323    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:07.094112    8536 request.go:629] Waited for 151.6022ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:32:07.094347    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rx2b2
	I0610 12:32:07.094347    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:07.094347    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:07.094347    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:07.097319    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:07.097453    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:07.097453    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:07 GMT
	I0610 12:32:07.097453    8536 round_trippers.go:580]     Audit-Id: 61ff617e-56ee-4ec1-b07b-35f5078336fc
	I0610 12:32:07.097453    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:07.097453    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:07.097608    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:07.097608    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:07.098035    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rx2b2","generateName":"kube-proxy-","namespace":"kube-system","uid":"ce59a99b-a561-4598-9399-147f748433a2","resourceVersion":"1632","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0610 12:32:07.296812    8536 request.go:629] Waited for 197.6622ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:32:07.296922    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m02
	I0610 12:32:07.297040    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:07.297040    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:07.297040    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:07.304019    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:32:07.304937    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:07.304937    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:07 GMT
	I0610 12:32:07.304937    8536 round_trippers.go:580]     Audit-Id: eaa352c8-3a82-448c-8873-b1d70fb7b43d
	I0610 12:32:07.304937    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:07.304937    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:07.304937    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:07.304937    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:07.304937    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m02","uid":"d6c82072-2da7-43ea-a5be-15d2866c6945","resourceVersion":"1817","creationTimestamp":"2024-06-10T12:11:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_11_29_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:11:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4583 chars]
	I0610 12:32:07.304937    8536 pod_ready.go:97] node "multinode-813300-m02" hosting pod "kube-proxy-rx2b2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m02" has status "Ready":"Unknown"
	I0610 12:32:07.304937    8536 pod_ready.go:81] duration metric: took 362.6107ms for pod "kube-proxy-rx2b2" in "kube-system" namespace to be "Ready" ...
	E0610 12:32:07.304937    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300-m02" hosting pod "kube-proxy-rx2b2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m02" has status "Ready":"Unknown"
	I0610 12:32:07.304937    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vw56h" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:07.499191    8536 request.go:629] Waited for 194.0542ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vw56h
	I0610 12:32:07.499381    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vw56h
	I0610 12:32:07.499381    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:07.499381    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:07.499381    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:07.503911    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:07.504002    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:07.504002    8536 round_trippers.go:580]     Audit-Id: 082572c8-0f20-449d-8a16-f6239b8e40de
	I0610 12:32:07.504002    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:07.504002    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:07.504002    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:07.504002    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:07.504070    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:07 GMT
	I0610 12:32:07.504070    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vw56h","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3f9e738-89d2-4776-a212-a1ca28952f7c","resourceVersion":"1595","creationTimestamp":"2024-06-10T12:25:52Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8213c423-4397-473a-9133-614b59e17eec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8213c423-4397-473a-9133-614b59e17eec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0610 12:32:07.702255    8536 request.go:629] Waited for 196.7957ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m03
	I0610 12:32:07.702255    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300-m03
	I0610 12:32:07.702255    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:07.702475    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:07.702475    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:07.706939    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:07.706939    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:07.706939    8536 round_trippers.go:580]     Audit-Id: 48ed5ece-df30-4f8a-8d72-813f3ac5e860
	I0610 12:32:07.706939    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:07.706939    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:07.706939    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:07.706939    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:07.707259    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:07 GMT
	I0610 12:32:07.707491    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300-m03","uid":"7d0b0b62-45c8-40aa-9f7a-5bb189395355","resourceVersion":"1813","creationTimestamp":"2024-06-10T12:25:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T12_25_53_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4413 chars]
	I0610 12:32:07.708036    8536 pod_ready.go:97] node "multinode-813300-m03" hosting pod "kube-proxy-vw56h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m03" has status "Ready":"Unknown"
	I0610 12:32:07.708036    8536 pod_ready.go:81] duration metric: took 403.096ms for pod "kube-proxy-vw56h" in "kube-system" namespace to be "Ready" ...
	E0610 12:32:07.708036    8536 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-813300-m03" hosting pod "kube-proxy-vw56h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-813300-m03" has status "Ready":"Unknown"
	I0610 12:32:07.708104    8536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:07.903404    8536 request.go:629] Waited for 195.07ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:32:07.903404    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-813300
	I0610 12:32:07.903404    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:07.903404    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:07.903404    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:07.908263    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:07.908263    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:07.908263    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:07 GMT
	I0610 12:32:07.908263    8536 round_trippers.go:580]     Audit-Id: 2b14cc46-f47a-4fa1-bfa8-9f0430821547
	I0610 12:32:07.908420    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:07.908420    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:07.908420    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:07.908420    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:07.908692    8536 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-813300","namespace":"kube-system","uid":"bd85735c-2f0d-48ab-bb0e-83f471c3af0a","resourceVersion":"1742","creationTimestamp":"2024-06-10T12:08:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.mirror":"4d9c84710aef19c4449f4b7691d0af07","kubernetes.io/config.seen":"2024-06-10T12:08:00.781972261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0610 12:32:08.091840    8536 request.go:629] Waited for 182.2159ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:08.092125    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes/multinode-813300
	I0610 12:32:08.092125    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:08.092125    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:08.092125    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:08.096985    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:08.096985    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:08.096985    8536 round_trippers.go:580]     Audit-Id: 2dc8d8ea-72d2-43e5-bff1-f537d7d89d5c
	I0610 12:32:08.096985    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:08.096985    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:08.096985    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:08.096985    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:08.096985    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:08 GMT
	I0610 12:32:08.097927    8536 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:07:57Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0610 12:32:08.098297    8536 pod_ready.go:92] pod "kube-scheduler-multinode-813300" in "kube-system" namespace has status "Ready":"True"
	I0610 12:32:08.098297    8536 pod_ready.go:81] duration metric: took 390.1901ms for pod "kube-scheduler-multinode-813300" in "kube-system" namespace to be "Ready" ...
	I0610 12:32:08.098494    8536 pod_ready.go:38] duration metric: took 27.231341s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:32:08.098494    8536 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:32:08.108011    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 12:32:08.132906    8536 command_runner.go:130] > d7941126134f
	I0610 12:32:08.133490    8536 logs.go:276] 1 containers: [d7941126134f]
	I0610 12:32:08.147897    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 12:32:08.171623    8536 command_runner.go:130] > 877ee07c1499
	I0610 12:32:08.172935    8536 logs.go:276] 1 containers: [877ee07c1499]
	I0610 12:32:08.181505    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 12:32:08.212642    8536 command_runner.go:130] > 24f3f7e041f9
	I0610 12:32:08.213523    8536 command_runner.go:130] > f2e39052db19
	I0610 12:32:08.213637    8536 logs.go:276] 2 containers: [24f3f7e041f9 f2e39052db19]
	I0610 12:32:08.222946    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 12:32:08.249302    8536 command_runner.go:130] > d90e72ef4670
	I0610 12:32:08.249302    8536 command_runner.go:130] > bd1a6cd98743
	I0610 12:32:08.249302    8536 logs.go:276] 2 containers: [d90e72ef4670 bd1a6cd98743]
	I0610 12:32:08.261166    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 12:32:08.286088    8536 command_runner.go:130] > 1de5fa0ef838
	I0610 12:32:08.286088    8536 command_runner.go:130] > afad8b05897e
	I0610 12:32:08.287155    8536 logs.go:276] 2 containers: [1de5fa0ef838 afad8b05897e]
	I0610 12:32:08.300463    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 12:32:08.327222    8536 command_runner.go:130] > 3bee53d5fef9
	I0610 12:32:08.327222    8536 command_runner.go:130] > f1409bf44ff1
	I0610 12:32:08.327222    8536 logs.go:276] 2 containers: [3bee53d5fef9 f1409bf44ff1]
	I0610 12:32:08.337171    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 12:32:08.363781    8536 command_runner.go:130] > c3c4316beca6
	I0610 12:32:08.363781    8536 command_runner.go:130] > c39d54960e7d
	I0610 12:32:08.366172    8536 logs.go:276] 2 containers: [c3c4316beca6 c39d54960e7d]
	I0610 12:32:08.366172    8536 logs.go:123] Gathering logs for kube-scheduler [bd1a6cd98743] ...
	I0610 12:32:08.366294    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1a6cd98743"
	I0610 12:32:08.397218    8536 command_runner.go:130] ! I0610 12:07:55.711360       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.417322       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.417963       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.418046       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.418071       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.459055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.460659       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.464904       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.464952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.466483       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:08.397284    8536 command_runner.go:130] ! I0610 12:07:57.466650       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.502453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! E0610 12:07:57.507264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.503672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.506076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.506243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.506320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:08.397284    8536 command_runner.go:130] ! W0610 12:07:57.506362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.397845    8536 command_runner.go:130] ! W0610 12:07:57.506402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:08.397943    8536 command_runner.go:130] ! W0610 12:07:57.506651       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.398011    8536 command_runner.go:130] ! W0610 12:07:57.506722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:08.398078    8536 command_runner.go:130] ! W0610 12:07:57.507113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.398078    8536 command_runner.go:130] ! W0610 12:07:57.507193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:08.398161    8536 command_runner.go:130] ! E0610 12:07:57.511548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.398242    8536 command_runner.go:130] ! E0610 12:07:57.511795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:08.398328    8536 command_runner.go:130] ! E0610 12:07:57.512240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.398387    8536 command_runner.go:130] ! E0610 12:07:57.512647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:08.398470    8536 command_runner.go:130] ! E0610 12:07:57.515128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.398532    8536 command_runner.go:130] ! E0610 12:07:57.515218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:08.398532    8536 command_runner.go:130] ! E0610 12:07:57.515698       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.398665    8536 command_runner.go:130] ! E0610 12:07:57.516017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:08.398733    8536 command_runner.go:130] ! E0610 12:07:57.516332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.398795    8536 command_runner.go:130] ! E0610 12:07:57.516529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:08.398849    8536 command_runner.go:130] ! W0610 12:07:57.537276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:08.398994    8536 command_runner.go:130] ! E0610 12:07:57.537491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:57.537680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:57.538611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:57.537622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:57.538734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:57.538013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:57.539237       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:58.345815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:58.345914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:58.356843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:58.357045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:58.406587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! E0610 12:07:58.406863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399052    8536 command_runner.go:130] ! W0610 12:07:58.426795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399592    8536 command_runner.go:130] ! E0610 12:07:58.427119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.503514       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.503568       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.610877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.611650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.611603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.612141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.614694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.614992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.752570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.752635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.810605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.810721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.815170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.815852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.816493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! E0610 12:07:58.816687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:08.399659    8536 command_runner.go:130] ! W0610 12:07:58.834947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.400198    8536 command_runner.go:130] ! E0610 12:07:58.836145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:08.400198    8536 command_runner.go:130] ! W0610 12:07:58.838693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:08.400198    8536 command_runner.go:130] ! E0610 12:07:58.838938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:08.400392    8536 command_runner.go:130] ! W0610 12:07:58.897162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:08.400392    8536 command_runner.go:130] ! E0610 12:07:58.897200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:08.400466    8536 command_runner.go:130] ! I0610 12:08:01.565495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:08.400527    8536 command_runner.go:130] ! E0610 12:28:16.298586       1 run.go:74] "command failed" err="finished without leader elect"
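The list/watch denials stamped 12:07:58 in the scheduler log above are the usual control-plane bootstrap race: they stop once the client-ca informer cache syncs (the 12:08:01 line), and the final "finished without leader elect" at 12:28:16 is how kube-scheduler exits when the node is taken down mid leader election, not a crash. A minimal sketch for double-checking both after the restart, assuming the profile's kubeconfig context is multinode-813300 (the lease name kube-scheduler and the clusterrolebinding system:kube-scheduler are upstream kubeadm defaults, not taken from this log):

    # Holder of the scheduler's leader-election lease should be the new instance.
    kubectl --context multinode-813300 -n kube-system get lease kube-scheduler -o yaml
    # The RBAC the 12:07:58 denials complained about; kubeadm installs this binding.
    kubectl --context multinode-813300 get clusterrolebinding system:kube-scheduler -o wide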
	I0610 12:32:08.413382    8536 logs.go:123] Gathering logs for kube-proxy [1de5fa0ef838] ...
	I0610 12:32:08.413382    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1de5fa0ef838"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.254962       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.294630       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.150.144"]
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.403290       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.403338       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.403357       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.416009       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.416300       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.416345       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.424657       1 config.go:192] "Starting service config controller"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.425325       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.425369       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.425382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.432037       1 config.go:319] "Starting node config controller"
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.432075       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.535663       1 shared_informer.go:320] Caches are synced for node config
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.535744       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:08.445001    8536 command_runner.go:130] ! I0610 12:31:02.535786       1 shared_informer.go:320] Caches are synced for endpoint slice config
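The kube-proxy log above shows a clean single-stack IPv4 start with the iptables proxier. A quick way to confirm the active mode from the host, assuming the profile name multinode-813300 and kube-proxy's default metrics port 10249 inside the node:

    # Runs curl inside the guest over minikube ssh; expected output: iptables
    out/minikube-windows-amd64.exe -p multinode-813300 ssh -- curl -s http://localhost:10249/proxyMode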
	I0610 12:32:08.447997    8536 logs.go:123] Gathering logs for kube-apiserver [d7941126134f] ...
	I0610 12:32:08.448111    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7941126134f"
	I0610 12:32:08.477367    8536 command_runner.go:130] ! I0610 12:30:56.783636       1 options.go:221] external host was not specified, using 172.17.150.144
	I0610 12:32:08.477367    8536 command_runner.go:130] ! I0610 12:30:56.802716       1 server.go:148] Version: v1.30.1
	I0610 12:32:08.477367    8536 command_runner.go:130] ! I0610 12:30:56.802771       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.477919    8536 command_runner.go:130] ! I0610 12:30:57.206580       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 12:32:08.478050    8536 command_runner.go:130] ! I0610 12:30:57.224598       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:08.478073    8536 command_runner.go:130] ! I0610 12:30:57.225809       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 12:32:08.478209    8536 command_runner.go:130] ! I0610 12:30:57.226002       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 12:32:08.478269    8536 command_runner.go:130] ! I0610 12:30:57.226365       1 instance.go:299] Using reconciler: lease
	I0610 12:32:08.478269    8536 command_runner.go:130] ! I0610 12:30:57.637999       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0610 12:32:08.478312    8536 command_runner.go:130] ! W0610 12:30:57.638403       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478312    8536 command_runner.go:130] ! I0610 12:30:58.007103       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0610 12:32:08.478312    8536 command_runner.go:130] ! I0610 12:30:58.008169       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0610 12:32:08.478365    8536 command_runner.go:130] ! I0610 12:30:58.357732       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0610 12:32:08.478365    8536 command_runner.go:130] ! I0610 12:30:58.553660       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0610 12:32:08.478407    8536 command_runner.go:130] ! I0610 12:30:58.567826       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0610 12:32:08.478407    8536 command_runner.go:130] ! W0610 12:30:58.567936       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478477    8536 command_runner.go:130] ! W0610 12:30:58.567947       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.478477    8536 command_runner.go:130] ! I0610 12:30:58.569137       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0610 12:32:08.478518    8536 command_runner.go:130] ! W0610 12:30:58.569236       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478518    8536 command_runner.go:130] ! I0610 12:30:58.570636       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0610 12:32:08.478518    8536 command_runner.go:130] ! I0610 12:30:58.572063       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0610 12:32:08.478570    8536 command_runner.go:130] ! W0610 12:30:58.572082       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0610 12:32:08.478570    8536 command_runner.go:130] ! W0610 12:30:58.572088       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0610 12:32:08.478603    8536 command_runner.go:130] ! I0610 12:30:58.575154       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0610 12:32:08.478653    8536 command_runner.go:130] ! W0610 12:30:58.575194       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0610 12:32:08.478653    8536 command_runner.go:130] ! I0610 12:30:58.576862       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0610 12:32:08.478704    8536 command_runner.go:130] ! W0610 12:30:58.576966       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478704    8536 command_runner.go:130] ! W0610 12:30:58.576976       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.478754    8536 command_runner.go:130] ! I0610 12:30:58.577920       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0610 12:32:08.478754    8536 command_runner.go:130] ! W0610 12:30:58.578059       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478803    8536 command_runner.go:130] ! W0610 12:30:58.578305       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478871    8536 command_runner.go:130] ! I0610 12:30:58.579295       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0610 12:32:08.478907    8536 command_runner.go:130] ! I0610 12:30:58.581807       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0610 12:32:08.478907    8536 command_runner.go:130] ! W0610 12:30:58.581943       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.478907    8536 command_runner.go:130] ! W0610 12:30:58.582127       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.478907    8536 command_runner.go:130] ! I0610 12:30:58.583254       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0610 12:32:08.478971    8536 command_runner.go:130] ! W0610 12:30:58.583359       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479047    8536 command_runner.go:130] ! W0610 12:30:58.583370       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.479047    8536 command_runner.go:130] ! I0610 12:30:58.594003       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0610 12:32:08.479094    8536 command_runner.go:130] ! W0610 12:30:58.594046       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0610 12:32:08.479094    8536 command_runner.go:130] ! I0610 12:30:58.597008       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0610 12:32:08.479176    8536 command_runner.go:130] ! W0610 12:30:58.597028       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479176    8536 command_runner.go:130] ! W0610 12:30:58.597047       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.479176    8536 command_runner.go:130] ! I0610 12:30:58.597658       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0610 12:32:08.479238    8536 command_runner.go:130] ! W0610 12:30:58.597679       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479238    8536 command_runner.go:130] ! W0610 12:30:58.597686       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.479238    8536 command_runner.go:130] ! I0610 12:30:58.602889       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0610 12:32:08.479305    8536 command_runner.go:130] ! W0610 12:30:58.602907       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479305    8536 command_runner.go:130] ! W0610 12:30:58.602913       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.479305    8536 command_runner.go:130] ! I0610 12:30:58.608646       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0610 12:32:08.479305    8536 command_runner.go:130] ! I0610 12:30:58.610262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0610 12:32:08.479368    8536 command_runner.go:130] ! W0610 12:30:58.610275       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0610 12:32:08.479368    8536 command_runner.go:130] ! W0610 12:30:58.610281       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479368    8536 command_runner.go:130] ! I0610 12:30:58.619816       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0610 12:32:08.479368    8536 command_runner.go:130] ! W0610 12:30:58.619856       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0610 12:32:08.479435    8536 command_runner.go:130] ! W0610 12:30:58.619866       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0610 12:32:08.479435    8536 command_runner.go:130] ! I0610 12:30:58.627044       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0610 12:32:08.479435    8536 command_runner.go:130] ! W0610 12:30:58.627092       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479519    8536 command_runner.go:130] ! W0610 12:30:58.627296       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:08.479519    8536 command_runner.go:130] ! I0610 12:30:58.629017       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0610 12:32:08.479519    8536 command_runner.go:130] ! W0610 12:30:58.629067       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479519    8536 command_runner.go:130] ! I0610 12:30:58.659122       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0610 12:32:08.479582    8536 command_runner.go:130] ! W0610 12:30:58.659244       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:08.479582    8536 command_runner.go:130] ! I0610 12:30:59.341469       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:08.479582    8536 command_runner.go:130] ! I0610 12:30:59.341814       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:08.479639    8536 command_runner.go:130] ! I0610 12:30:59.341806       1 secure_serving.go:213] Serving securely on [::]:8443
	I0610 12:32:08.479639    8536 command_runner.go:130] ! I0610 12:30:59.342486       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0610 12:32:08.479720    8536 command_runner.go:130] ! I0610 12:30:59.342867       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0610 12:32:08.479720    8536 command_runner.go:130] ! I0610 12:30:59.342901       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0610 12:32:08.479777    8536 command_runner.go:130] ! I0610 12:30:59.342987       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0610 12:32:08.479777    8536 command_runner.go:130] ! I0610 12:30:59.341865       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:08.479777    8536 command_runner.go:130] ! I0610 12:30:59.344865       1 controller.go:116] Starting legacy_token_tracking_controller
	I0610 12:32:08.479777    8536 command_runner.go:130] ! I0610 12:30:59.344899       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0610 12:32:08.479841    8536 command_runner.go:130] ! I0610 12:30:59.346737       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0610 12:32:08.479841    8536 command_runner.go:130] ! I0610 12:30:59.346910       1 available_controller.go:423] Starting AvailableConditionController
	I0610 12:32:08.479841    8536 command_runner.go:130] ! I0610 12:30:59.346960       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0610 12:32:08.479897    8536 command_runner.go:130] ! I0610 12:30:59.347078       1 aggregator.go:163] waiting for initial CRD sync...
	I0610 12:32:08.479897    8536 command_runner.go:130] ! I0610 12:30:59.347170       1 controller.go:78] Starting OpenAPI AggregationController
	I0610 12:32:08.479897    8536 command_runner.go:130] ! I0610 12:30:59.347256       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0610 12:32:08.479971    8536 command_runner.go:130] ! I0610 12:30:59.347656       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0610 12:32:08.479971    8536 command_runner.go:130] ! I0610 12:30:59.347947       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0610 12:32:08.479971    8536 command_runner.go:130] ! I0610 12:30:59.348233       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0610 12:32:08.479971    8536 command_runner.go:130] ! I0610 12:30:59.348295       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0610 12:32:08.480047    8536 command_runner.go:130] ! I0610 12:30:59.341877       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0610 12:32:08.480047    8536 command_runner.go:130] ! I0610 12:30:59.377996       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0610 12:32:08.480047    8536 command_runner.go:130] ! I0610 12:30:59.378109       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0610 12:32:08.480104    8536 command_runner.go:130] ! I0610 12:30:59.378362       1 controller.go:139] Starting OpenAPI controller
	I0610 12:32:08.480104    8536 command_runner.go:130] ! I0610 12:30:59.378742       1 controller.go:87] Starting OpenAPI V3 controller
	I0610 12:32:08.480153    8536 command_runner.go:130] ! I0610 12:30:59.378883       1 naming_controller.go:291] Starting NamingConditionController
	I0610 12:32:08.480153    8536 command_runner.go:130] ! I0610 12:30:59.379043       1 establishing_controller.go:76] Starting EstablishingController
	I0610 12:32:08.480178    8536 command_runner.go:130] ! I0610 12:30:59.379247       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0610 12:32:08.480178    8536 command_runner.go:130] ! I0610 12:30:59.379438       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0610 12:32:08.480223    8536 command_runner.go:130] ! I0610 12:30:59.379518       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0610 12:32:08.480223    8536 command_runner.go:130] ! I0610 12:30:59.379777       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:08.480261    8536 command_runner.go:130] ! I0610 12:30:59.379999       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:08.480289    8536 command_runner.go:130] ! I0610 12:30:59.524664       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:08.480289    8536 command_runner.go:130] ! I0610 12:30:59.525326       1 policy_source.go:224] refreshing policies
	I0610 12:32:08.480341    8536 command_runner.go:130] ! I0610 12:30:59.543486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 12:32:08.480341    8536 command_runner.go:130] ! I0610 12:30:59.547084       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 12:32:08.480341    8536 command_runner.go:130] ! I0610 12:30:59.548579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 12:32:08.480341    8536 command_runner.go:130] ! I0610 12:30:59.549972       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 12:32:08.480415    8536 command_runner.go:130] ! I0610 12:30:59.550011       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 12:32:08.480415    8536 command_runner.go:130] ! I0610 12:30:59.551151       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 12:32:08.480415    8536 command_runner.go:130] ! I0610 12:30:59.554229       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 12:32:08.480415    8536 command_runner.go:130] ! I0610 12:30:59.560228       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 12:32:08.480479    8536 command_runner.go:130] ! I0610 12:30:59.578343       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 12:32:08.480479    8536 command_runner.go:130] ! I0610 12:30:59.578414       1 aggregator.go:165] initial CRD sync complete...
	I0610 12:32:08.480479    8536 command_runner.go:130] ! I0610 12:30:59.578429       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 12:32:08.480479    8536 command_runner.go:130] ! I0610 12:30:59.578437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 12:32:08.480545    8536 command_runner.go:130] ! I0610 12:30:59.578466       1 cache.go:39] Caches are synced for autoregister controller
	I0610 12:32:08.480545    8536 command_runner.go:130] ! I0610 12:30:59.606740       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 12:32:08.480545    8536 command_runner.go:130] ! I0610 12:31:00.360768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 12:32:08.480545    8536 command_runner.go:130] ! W0610 12:31:00.893787       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.150.144]
	I0610 12:32:08.480609    8536 command_runner.go:130] ! I0610 12:31:00.913283       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 12:32:08.480609    8536 command_runner.go:130] ! I0610 12:31:00.933946       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 12:32:08.480665    8536 command_runner.go:130] ! I0610 12:31:02.471259       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 12:32:08.480665    8536 command_runner.go:130] ! I0610 12:31:02.690867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 12:32:08.480665    8536 command_runner.go:130] ! I0610 12:31:02.714405       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 12:32:08.480665    8536 command_runner.go:130] ! I0610 12:31:02.840117       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 12:32:08.480728    8536 command_runner.go:130] ! I0610 12:31:02.856715       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
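The apiserver restart above completes cleanly: all caches sync, the master endpoint is reset to the node's new address 172.17.150.144, and the quota evaluators register. A hedged readiness sketch against the same context (both paths are standard apiserver health endpoints, not minikube-specific):

    # Per-check readiness detail; any failing check is listed by name.
    kubectl --context multinode-813300 get --raw "/readyz?verbose"
    # The kubernetes endpoint should match the 172.17.150.144 reset seen in the log.
    kubectl --context multinode-813300 -n default get endpoints kubernetes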
	I0610 12:32:08.490090    8536 logs.go:123] Gathering logs for etcd [877ee07c1499] ...
	I0610 12:32:08.490090    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877ee07c1499"
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.207374Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208407Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.150.144:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.150.144:2380","--initial-cluster=multinode-813300=https://172.17.150.144:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.150.144:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.150.144:2380","--name=multinode-813300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208499Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.208577Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208593Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.150.144:2380"]}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208715Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.218326Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"]}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.22047Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-813300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.244201Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.944438ms"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.274404Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.303075Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","commit-index":1913}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=()"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became follower at term 2"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8f4442f54c46fb8d [peers: [], term: 2, commit: 1913, applied: 0, lastindex: 1913, lastterm: 2]"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.318917Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.323726Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1273}
	I0610 12:32:08.517379    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.328272Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1642}
	I0610 12:32:08.518372    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.335671Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0610 12:32:08.518422    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.347777Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8f4442f54c46fb8d","timeout":"7s"}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.349755Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8f4442f54c46fb8d"}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.350228Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8f4442f54c46fb8d","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.352715Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.36067Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361057Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0610 12:32:08.518496    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361302Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0610 12:32:08.518754    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=(10323449867154160525)"}
	I0610 12:32:08.518754    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363612Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","added-peer-id":"8f4442f54c46fb8d","added-peer-peer-urls":["https://172.17.159.171:2380"]}
	I0610 12:32:08.518754    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364067Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","cluster-version":"3.5"}
	I0610 12:32:08.518834    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0610 12:32:08.518875    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.367772Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:08.518971    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.373962Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.150.144:2380"}
	I0610 12:32:08.518971    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.374209Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.150.144:2380"}
	I0610 12:32:08.519017    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375497Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8f4442f54c46fb8d","initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0610 12:32:08.519058    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375805Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0610 12:32:08.519103    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d is starting a new election at term 2"}
	I0610 12:32:08.519143    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.50539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became pre-candidate at term 2"}
	I0610 12:32:08.519193    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgPreVoteResp from 8f4442f54c46fb8d at term 2"}
	I0610 12:32:08.519233    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became candidate at term 3"}
	I0610 12:32:08.519233    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgVoteResp from 8f4442f54c46fb8d at term 3"}
	I0610 12:32:08.519279    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became leader at term 3"}
	I0610 12:32:08.519279    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8f4442f54c46fb8d elected leader 8f4442f54c46fb8d at term 3"}
	I0610 12:32:08.519318    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.511486Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8f4442f54c46fb8d","local-member-attributes":"{Name:multinode-813300 ClientURLs:[https://172.17.150.144:2379]}","request-path":"/0/members/8f4442f54c46fb8d/attributes","cluster-id":"ede117c4f607edf2","publish-timeout":"7s"}
	I0610 12:32:08.519362    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:08.519362    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512682Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:08.519401    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.517481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0610 12:32:08.519446    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0610 12:32:08.519486    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520973Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0610 12:32:08.519486    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.543402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.150.144:2379"}
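etcd recovered its WAL, won the single-member election at term 3, and is serving clients on 172.17.150.144:2379; note the member record still carries the pre-restart peer URL 172.17.159.171:2380, consistent with the Hyper-V VM picking up a new address on restart. A sketch for checking member health from inside the pod, assuming the kubeadm-style pod name etcd-multinode-813300 (the cert paths are copied from the flags in the log above):

    # Endpoint status straight from the etcd container; expect IS LEADER = true.
    kubectl --context multinode-813300 -n kube-system exec etcd-multinode-813300 -- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint status -w table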
	I0610 12:32:08.528756    8536 logs.go:123] Gathering logs for coredns [24f3f7e041f9] ...
	I0610 12:32:08.528756    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f3f7e041f9"
	I0610 12:32:08.557354    8536 command_runner.go:130] > .:53
	I0610 12:32:08.557354    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:08.557354    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:08.558174    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:08.558174    8536 command_runner.go:130] > [INFO] 127.0.0.1:34387 - 41508 "HINFO IN 7171992165040069679.5605173313288368349. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051230172s
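CoreDNS 1.11.1 loaded its configuration and completed its HINFO self-probe (the NXDOMAIN for the random name is expected and the ~51ms round trip suggests upstream resolution is working). An in-cluster resolution check in the same one-shot-pod style the addon tests use, context assumed:

    # Expect the kubernetes service ClusterIP back from CoreDNS.
    kubectl --context multinode-813300 run --rm -it dns-probe --image=busybox --restart=Never -- nslookup kubernetes.default.svc.cluster.local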
	I0610 12:32:08.559962    8536 logs.go:123] Gathering logs for kube-scheduler [d90e72ef4670] ...
	I0610 12:32:08.560013    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90e72ef4670"
	I0610 12:32:08.589079    8536 command_runner.go:130] ! I0610 12:30:56.811878       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:08.589079    8536 command_runner.go:130] ! W0610 12:30:59.481898       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:08.589617    8536 command_runner.go:130] ! W0610 12:30:59.482123       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:08.589617    8536 command_runner.go:130] ! W0610 12:30:59.482217       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:08.589617    8536 command_runner.go:130] ! W0610 12:30:59.482255       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.514164       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.514266       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.518405       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.518496       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.518958       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:08.589722    8536 command_runner.go:130] ! I0610 12:30:59.519337       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:08.589894    8536 command_runner.go:130] ! I0610 12:30:59.619122       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
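The three startup warnings in the scheduler log are the standard bootstrap ordering race: the scheduler comes up before the apiserver has published the extension-apiserver-authentication configmap, and the log itself prints the rolebinding fix for the persistent case. Since the caches sync two lines later, no action is needed here; if the warning ever persisted, a check like this (context assumed) would show whether the configmap exists:

    kubectl --context multinode-813300 -n kube-system get configmap extension-apiserver-authentication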
	I0610 12:32:08.592762    8536 logs.go:123] Gathering logs for kubelet ...
	I0610 12:32:08.592854    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 12:32:08.625320    8536 command_runner.go:130] > Jun 10 12:30:48 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:08.625320    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322075    1392 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:08.625320    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322142    1392 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.625320    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.324143    1392 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:08.625320    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: E0610 12:30:49.325228    1392 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:08.625543    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:08.625543    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:08.625602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:08.625602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:08.625602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:08.625675    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078361    1448 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:08.625675    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078445    1448 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.625718    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078696    1448 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:08.625718    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: E0610 12:30:50.078819    1448 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:08.625718    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:08.625783    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:08.625832    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:08.625832    8536 command_runner.go:130] > Jun 10 12:30:53 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:08.625878    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021338    1528 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:08.625878    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021853    1528 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.625942    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.022286    1528 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:08.625978    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.024650    1528 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0610 12:32:08.626018    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.040752    1528 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:08.626018    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.082883    1528 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0610 12:32:08.626053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.083180    1528 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0610 12:32:08.626092    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085143    1528 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0610 12:32:08.626166    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085256    1528 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-813300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0610 12:32:08.626218    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.086924    1528 topology_manager.go:138] "Creating topology manager with none policy"
	I0610 12:32:08.626218    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.087122    1528 container_manager_linux.go:301] "Creating device plugin manager"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.088486    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.090915    1528 kubelet.go:400] "Attempting to sync node with API server"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091108    1528 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091402    1528 kubelet.go:312] "Adding apiserver pod source"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.092259    1528 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.097253    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.097520    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.099693    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.099740    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.099843    1528 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.102710    1528 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.103981    1528 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.107194    1528 server.go:1264] "Started kubelet"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.120692    1528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.122088    1528 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.125028    1528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.128857    1528 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.132449    1528 server.go:455] "Adding debug handlers to kubelet server"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.124281    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.137444    1528 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.139221    1528 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.141909    1528 factory.go:221] Registration of the systemd container factory successfully
	I0610 12:32:08.626258    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147241    1528 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0610 12:32:08.626809    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147375    1528 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0610 12:32:08.626877    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.144942    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="200ms"
	I0610 12:32:08.626877    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.143108    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.627057    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.154145    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.627057    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.179909    1528 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0610 12:32:08.627110    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180022    1528 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0610 12:32:08.627110    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180086    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:08.627149    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181162    1528 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0610 12:32:08.627149    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181233    1528 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0610 12:32:08.627191    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181261    1528 policy_none.go:49] "None policy: Start"
	I0610 12:32:08.627191    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.192385    1528 reconciler.go:26] "Reconciler: start to sync state"
	I0610 12:32:08.627231    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193179    1528 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0610 12:32:08.627231    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193256    1528 state_mem.go:35] "Initializing new in-memory state store"
	I0610 12:32:08.627308    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193830    1528 state_mem.go:75] "Updated machine memory state"
	I0610 12:32:08.627343    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.197194    1528 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0610 12:32:08.627343    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.204265    1528 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0610 12:32:08.627386    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.219894    1528 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0610 12:32:08.627386    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.226098    1528 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-813300\" not found"
	I0610 12:32:08.627421    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.226649    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0610 12:32:08.627490    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.230123    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0610 12:32:08.627490    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231021    1528 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0610 12:32:08.627490    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231133    1528 kubelet.go:2337] "Starting kubelet main sync loop"
	I0610 12:32:08.627534    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.231189    1528 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0610 12:32:08.627534    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.244084    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:08.627570    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.247037    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.627570    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.247227    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.627570    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.253607    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:08.627570    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.255809    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:08.627570    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:08.627733    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:08.627733    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:08.627733    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
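
The canary failure just above is the kubelet probing whether it can still create iptables chains; exit status 3 with "Table does not exist (do you need to insmod?)" typically means the ip6table_nat kernel module is not loaded in the guest, and the log shows the kubelet continuing normally afterwards. A minimal Go sketch of the same probe, assuming a Linux host with ip6tables on PATH (this mirrors the failing command, not kubelet's actual canary code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Lists the same "nat" table the canary targets; exit status 3 here
        // reproduces the "Table does not exist" failure in the log above.
        out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
        if err != nil {
            fmt.Printf("probe failed: %v\n%s", err, out)
            return
        }
        fmt.Println("ip6tables nat table is available")
    }
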
	I0610 12:32:08.627815    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334683    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62db1c721951a36c62a6369a30c651a661eb2871f8363fa341ef8ad7b7080a07"
	I0610 12:32:08.627862    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334742    1528 topology_manager.go:215] "Topology Admit Handler" podUID="180cf4cc399d604c28cc4df1442ebd5a" podNamespace="kube-system" podName="kube-apiserver-multinode-813300"
	I0610 12:32:08.627862    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.336338    1528 topology_manager.go:215] "Topology Admit Handler" podUID="37865ce1914dc04a4a0a25e98b80ce35" podNamespace="kube-system" podName="kube-controller-manager-multinode-813300"
	I0610 12:32:08.627912    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.338106    1528 topology_manager.go:215] "Topology Admit Handler" podUID="4d9c84710aef19c4449f4b7691d0af07" podNamespace="kube-system" podName="kube-scheduler-multinode-813300"
	I0610 12:32:08.627912    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340794    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7d28a97ba1c48cbe8edd3eab76f64cdcdebf920a03921644f63d12856b642f0"
	I0610 12:32:08.627972    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340848    1528 topology_manager.go:215] "Topology Admit Handler" podUID="76e8893277ba7cea6624561880496e47" podNamespace="kube-system" podName="etcd-multinode-813300"
	I0610 12:32:08.627972    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.341927    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04d7b3d4fcc648cd6b447a383defba86200f1071acc892670457ebeebb52f22"
	I0610 12:32:08.627972    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.342208    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bc6043f7b92f091f4ceee7db3e11617072391c6e5303f4ecdafdb06d4b585a"
	I0610 12:32:08.628049    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.356667    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="400ms"
	I0610 12:32:08.628049    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.365771    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c"
	I0610 12:32:08.628109    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.380268    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b6aa9a0e1d1cbcee858808fc74f396cfba20777f2316093484920397e9b4ca"
	I0610 12:32:08.628153    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:08.628226    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397846    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-ca-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:08.628267    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397877    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:08.628307    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397922    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-flexvolume-dir\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:08.628363    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397961    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-k8s-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:08.628363    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397979    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-kubeconfig\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:08.628445    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398000    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-data\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:08.628486    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398019    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-k8s-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:08.628538    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398038    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-ca-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:08.628538    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398055    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d9c84710aef19c4449f4b7691d0af07-kubeconfig\") pod \"kube-scheduler-multinode-813300\" (UID: \"4d9c84710aef19c4449f4b7691d0af07\") " pod="kube-system/kube-scheduler-multinode-813300"
	I0610 12:32:08.628606    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398073    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-certs\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:08.628663    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.400870    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d"
	I0610 12:32:08.628692    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.416196    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689b8976cc0293bf6ae2ffaf7abbe0a59cfa7521907fd652e86da3912515d25d"
	I0610 12:32:08.628767    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.442360    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10e49596de5e51f9986bebf2105f07084a083e5e8c2ab50684531210b032662"
	I0610 12:32:08.628799    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.454932    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.456598    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.759421    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="800ms"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.858477    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.859580    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.205231    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.205310    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.248476    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.249836    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.406658    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.406731    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.487592    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.561164    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="1.6s"
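
The "Failed to ensure lease exists, will retry" entries double their interval on every failure while the API server on 172.17.150.144:8443 refuses connections: 200ms, then 400ms, 800ms, and 1.6s above. A minimal sketch of that capped doubling, with a hypothetical upper bound (illustrative only, not kubelet's lease controller):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond // first retry interval in the log
        maxInterval := 7 * time.Second     // hypothetical cap for the sketch
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d failed, next retry in %v\n", attempt, interval)
            if next := interval * 2; next <= maxInterval {
                interval = next // 200ms -> 400ms -> 800ms -> 1.6s, as logged
            }
        }
    }
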
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.661352    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.663943    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.751130    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.628827    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.751205    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:08.629409    8536 command_runner.go:130] > Jun 10 12:30:56 multinode-813300 kubelet[1528]: E0610 12:30:56.215699    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:08.629531    8536 command_runner.go:130] > Jun 10 12:30:57 multinode-813300 kubelet[1528]: I0610 12:30:57.265569    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:08.629531    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636898    1528 kubelet_node_status.go:112] "Node was previously registered" node="multinode-813300"
	I0610 12:32:08.629531    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636993    1528 kubelet_node_status.go:76] "Successfully registered node" node="multinode-813300"
	I0610 12:32:08.629592    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.638685    1528 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0610 12:32:08.629728    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639257    1528 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0610 12:32:08.629796    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639985    1528 setters.go:580] "Node became not ready" node="multinode-813300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-10T12:30:59Z","lastTransitionTime":"2024-06-10T12:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
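
At 12:30:59 the node re-registers and immediately reports Ready=False with reason KubeletNotReady because no CNI config has been written yet; every "network is not ready" pod sync error below hangs off this condition. A short client-go sketch for reading that same condition, assuming KUBECONFIG points at this cluster's kubeconfig (standard client-go packages; the node name is from this test):

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-813300", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // While the CNI config is missing this prints:
                // False KubeletNotReady container runtime network not ready: ...
                fmt.Println(c.Status, c.Reason, c.Message)
            }
        }
    }
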
	I0610 12:32:08.629832    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.103240    1528 apiserver.go:52] "Watching apiserver"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109200    1528 topology_manager.go:215] "Topology Admit Handler" podUID="40bf0aff-00b2-40c7-bed7-52b8cadbc3a1" podNamespace="kube-system" podName="kube-proxy-nrpvt"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109472    1528 topology_manager.go:215] "Topology Admit Handler" podUID="aad8124e-6c05-4719-9adb-edc11b3cce42" podNamespace="kube-system" podName="kindnet-29gbv"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109721    1528 topology_manager.go:215] "Topology Admit Handler" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kbhvv"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109954    1528 topology_manager.go:215] "Topology Admit Handler" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e" podNamespace="kube-system" podName="storage-provisioner"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110077    1528 topology_manager.go:215] "Topology Admit Handler" podUID="3191c71a-8c87-4390-8232-8653f494d1f0" podNamespace="default" podName="busybox-fc5497c4f-z28tq"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.110308    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110641    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-813300" podUID="f824b391-b3d2-49ec-ba7d-863cb2150f81"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.111896    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.115871    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.147565    1528 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.155423    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160314    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6dfedc3-d6ff-412c-8a13-40a493c4199e-tmp\") pod \"storage-provisioner\" (UID: \"f6dfedc3-d6ff-412c-8a13-40a493c4199e\") " pod="kube-system/storage-provisioner"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160428    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-cni-cfg\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-xtables-lock\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161224    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-xtables-lock\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:08.629862    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161359    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-lib-modules\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:08.630466    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162089    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.630515    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162182    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.662151031 +0000 UTC m=+6.753273950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.630595    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.162238    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-lib-modules\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:08.630595    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.175000    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-813300"
	I0610 12:32:08.630649    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.186991    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630649    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187290    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630750    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.687498638 +0000 UTC m=+6.778621457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.246331    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f80d01e953cc664fc05c397fdad000" path="/var/lib/kubelet/pods/93f80d01e953cc664fc05c397fdad000/volumes"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.248399    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa7bd9cfb361baaed8d7d5729a6c77c" path="/var/lib/kubelet/pods/baa7bd9cfb361baaed8d7d5729a6c77c/volumes"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.316426    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-813300" podStartSLOduration=0.316407314 podStartE2EDuration="316.407314ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.316147208 +0000 UTC m=+6.407270027" watchObservedRunningTime="2024-06-10 12:31:00.316407314 +0000 UTC m=+6.407530233"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.439081    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-813300" podStartSLOduration=0.439018164 podStartE2EDuration="439.018164ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.409703778 +0000 UTC m=+6.500826597" watchObservedRunningTime="2024-06-10 12:31:00.439018164 +0000 UTC m=+6.530141083"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.631684    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667882    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667966    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.667947638 +0000 UTC m=+7.759070557 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769226    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769334    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769428    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.769408565 +0000 UTC m=+7.860531384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.231939    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.679952    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.680142    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.680120563 +0000 UTC m=+9.771243482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.781772    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782050    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.630781    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782132    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.7821123 +0000 UTC m=+9.873235219 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.631364    8536 command_runner.go:130] > Jun 10 12:31:02 multinode-813300 kubelet[1528]: E0610 12:31:02.234039    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.631419    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.232296    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.631419    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.701884    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.631529    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.702058    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.702037863 +0000 UTC m=+13.793160782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.631529    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802160    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.631570    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802233    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.631731    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802292    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.802272966 +0000 UTC m=+13.893395785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.207349    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.238069    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:05 multinode-813300 kubelet[1528]: E0610 12:31:05.232753    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:06 multinode-813300 kubelet[1528]: E0610 12:31:06.233804    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.231988    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736592    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736825    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.736801176 +0000 UTC m=+21.827923995 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837037    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.631791    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837146    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.632591    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837219    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.837199504 +0000 UTC m=+21.928322423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.632695    8536 command_runner.go:130] > Jun 10 12:31:08 multinode-813300 kubelet[1528]: E0610 12:31:08.232310    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.632695    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.208416    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.632695    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.231620    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.632774    8536 command_runner.go:130] > Jun 10 12:31:10 multinode-813300 kubelet[1528]: E0610 12:31:10.233882    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.632854    8536 command_runner.go:130] > Jun 10 12:31:11 multinode-813300 kubelet[1528]: E0610 12:31:11.232126    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.632933    8536 command_runner.go:130] > Jun 10 12:31:12 multinode-813300 kubelet[1528]: E0610 12:31:12.233695    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633011    8536 command_runner.go:130] > Jun 10 12:31:13 multinode-813300 kubelet[1528]: E0610 12:31:13.231660    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633136    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.210433    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.633204    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.234870    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633204    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.232790    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633204    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816637    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.633344    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816990    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.816931565 +0000 UTC m=+37.908054384 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.633375    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918429    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.633405    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918619    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.633450    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918694    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.918675278 +0000 UTC m=+38.009798097 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
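
The MountVolume.SetUp retries above are spaced with a factor-2 backoff (durationBeforeRetry 500ms, 1s, 2s, 4s, 8s, then 16s) and keep failing until the kubelet's informer cache has actually listed the coredns ConfigMap and kube-root-ca.crt, which in turn requires the reflectors above to reach the API server. A minimal sketch of that retry shape using wait.ExponentialBackoff, assuming k8s.io/apimachinery in go.mod (mountConfigVolume is a hypothetical stand-in for the real operation, not kubelet code):

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    var errNotRegistered = errors.New(`object "kube-system"/"coredns" not registered`)

    // mountConfigVolume fails until the (simulated) informer cache catches up.
    func mountConfigVolume(attempt *int) error {
        *attempt++
        if *attempt < 6 {
            return errNotRegistered
        }
        return nil
    }

    func main() {
        attempt := 0
        backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2, Steps: 6}
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            if err := mountConfigVolume(&attempt); err != nil {
                fmt.Printf("retrying after: %v\n", err) // backs off 500ms, 1s, 2s, ...
                return false, nil                       // not done; retry after backoff
            }
            return true, nil
        })
        fmt.Println("final error:", err) // <nil> once the mount succeeds
    }
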
	I0610 12:32:08.633538    8536 command_runner.go:130] > Jun 10 12:31:16 multinode-813300 kubelet[1528]: E0610 12:31:16.234954    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633538    8536 command_runner.go:130] > Jun 10 12:31:17 multinode-813300 kubelet[1528]: E0610 12:31:17.231668    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633620    8536 command_runner.go:130] > Jun 10 12:31:18 multinode-813300 kubelet[1528]: E0610 12:31:18.232656    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633620    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.214153    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.633697    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.231639    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633775    8536 command_runner.go:130] > Jun 10 12:31:20 multinode-813300 kubelet[1528]: E0610 12:31:20.234429    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633775    8536 command_runner.go:130] > Jun 10 12:31:21 multinode-813300 kubelet[1528]: E0610 12:31:21.232080    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633854    8536 command_runner.go:130] > Jun 10 12:31:22 multinode-813300 kubelet[1528]: E0610 12:31:22.232638    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.633854    8536 command_runner.go:130] > Jun 10 12:31:23 multinode-813300 kubelet[1528]: E0610 12:31:23.233105    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.633932    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.216593    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.633932    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.233280    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.634011    8536 command_runner.go:130] > Jun 10 12:31:25 multinode-813300 kubelet[1528]: E0610 12:31:25.232513    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.634089    8536 command_runner.go:130] > Jun 10 12:31:26 multinode-813300 kubelet[1528]: E0610 12:31:26.232337    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:27 multinode-813300 kubelet[1528]: E0610 12:31:27.233152    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:28 multinode-813300 kubelet[1528]: E0610 12:31:28.234103    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.218816    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.232070    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:30 multinode-813300 kubelet[1528]: E0610 12:31:30.231766    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.231673    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884791    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:08.634154    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884975    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.884956587 +0000 UTC m=+69.976079506 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:08.634691    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985181    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.634691    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985216    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.634691    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.98525598 +0000 UTC m=+70.076378799 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:08.634691    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.232018    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:08.634837    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.476305    1528 scope.go:117] "RemoveContainer" containerID="d32ce22e31b06bacb7530f3513c1f864d77685269868404ad7c71a4f15d91e41"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.477175    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.477659    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f6dfedc3-d6ff-412c-8a13-40a493c4199e)\"" pod="kube-system/storage-provisioner" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:33 multinode-813300 kubelet[1528]: E0610 12:31:33.232631    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 kubelet[1528]: I0610 12:31:47.231895    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.214930    1528 scope.go:117] "RemoveContainer" containerID="34b9299d74e382eadb8e7df1029506efc87e283ac8b38024d9524b8bb815f705"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: E0610 12:31:54.266189    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:08.634872    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.275663    1528 scope.go:117] "RemoveContainer" containerID="ba52603f8387590319a4d5a9511265065e2f90bff6628bec2f622754e034c70a"
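
The kubelet entries above show its mount-retry policy in action: each failed MountVolume.SetUp is requeued with a doubling durationBeforeRetry (16s at m=+37, then 32s at m=+69), so the coredns and busybox pods stay pending until the referenced ConfigMaps ("kube-system"/"coredns", "default"/"kube-root-ca.crt") are registered with the API server. The "Could not set up iptables canary" block is kubelet's periodic probe chain failing because the ip6tables nat table is unavailable in this guest kernel. A minimal Go sketch of the doubling backoff follows; retryWithBackoff is an illustrative name, not kubelet's real API (the real logic lives in its nestedpendingoperations package):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op with a doubling delay, mirroring the
// 16s -> 32s durationBeforeRetry progression in the kubelet log above.
// Illustrative sketch only, not kubelet's actual implementation.
func retryWithBackoff(op func() error, initial, ceiling time.Duration) {
	delay := initial
	for {
		err := op()
		if err == nil {
			return
		}
		fmt.Printf("%v; no retries permitted for %s\n", err, delay)
		time.Sleep(delay)
		delay *= 2 // double on every failure, as in the log above
		if delay > ceiling {
			delay = ceiling
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New(`object "kube-system"/"coredns" not registered`)
		}
		return nil
	}, 16*time.Millisecond, 128*time.Millisecond) // milliseconds so the demo finishes quickly
}
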
	I0610 12:32:08.678891    8536 logs.go:123] Gathering logs for coredns [f2e39052db19] ...
	I0610 12:32:08.678891    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e39052db19"
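
As the ssh_runner line shows, the gatherer fetches each component's logs by running docker logs --tail 400 <container> inside the guest. A self-contained sketch of the same pattern, shelling out locally rather than over SSH through minikube's ssh_runner; runDockerLogs is a hypothetical helper for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// runDockerLogs mirrors the gather step above: fetch the last n lines of a
// container's output via `docker logs --tail`.
func runDockerLogs(containerID string, n int) (string, error) {
	cmd := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), containerID)
	out, err := cmd.CombinedOutput() // docker logs writes to both stdout and stderr
	return string(out), err
}

func main() {
	out, err := runDockerLogs("f2e39052db19", 400)
	if err != nil {
		fmt.Println("log gathering failed:", err)
		return
	}
	fmt.Print(out)
}
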
	I0610 12:32:08.714741    8536 command_runner.go:130] > .:53
	I0610 12:32:08.715066    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:08.715137    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:08.715137    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:08.715183    8536 command_runner.go:130] > [INFO] 127.0.0.1:46276 - 35337 "HINFO IN 965239639799927989.3587586823131848737. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.052340371s
	I0610 12:32:08.715264    8536 command_runner.go:130] > [INFO] 10.244.1.2:36040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003047s
	I0610 12:32:08.715264    8536 command_runner.go:130] > [INFO] 10.244.1.2:51901 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.165635405s
	I0610 12:32:08.715264    8536 command_runner.go:130] > [INFO] 10.244.1.2:38890 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.065664181s
	I0610 12:32:08.715264    8536 command_runner.go:130] > [INFO] 10.244.1.2:40219 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.107303974s
	I0610 12:32:08.715347    8536 command_runner.go:130] > [INFO] 10.244.0.3:38184 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002396s
	I0610 12:32:08.715347    8536 command_runner.go:130] > [INFO] 10.244.0.3:57966 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001307s
	I0610 12:32:08.715408    8536 command_runner.go:130] > [INFO] 10.244.0.3:38338 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002131s
	I0610 12:32:08.715408    8536 command_runner.go:130] > [INFO] 10.244.0.3:41898 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000121s
	I0610 12:32:08.715487    8536 command_runner.go:130] > [INFO] 10.244.1.2:49043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200101s
	I0610 12:32:08.715544    8536 command_runner.go:130] > [INFO] 10.244.1.2:53918 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.147842886s
	I0610 12:32:08.715618    8536 command_runner.go:130] > [INFO] 10.244.1.2:50531 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001726s
	I0610 12:32:08.715679    8536 command_runner.go:130] > [INFO] 10.244.1.2:41881 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001246s
	I0610 12:32:08.715727    8536 command_runner.go:130] > [INFO] 10.244.1.2:34708 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.030026838s
	I0610 12:32:08.715727    8536 command_runner.go:130] > [INFO] 10.244.1.2:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002834s
	I0610 12:32:08.715786    8536 command_runner.go:130] > [INFO] 10.244.1.2:58166 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001901s
	I0610 12:32:08.715786    8536 command_runner.go:130] > [INFO] 10.244.1.2:46174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001048s
	I0610 12:32:08.715786    8536 command_runner.go:130] > [INFO] 10.244.0.3:52212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003513s
	I0610 12:32:08.715864    8536 command_runner.go:130] > [INFO] 10.244.0.3:44369 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095801s
	I0610 12:32:08.715926    8536 command_runner.go:130] > [INFO] 10.244.0.3:38578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001615s
	I0610 12:32:08.715926    8536 command_runner.go:130] > [INFO] 10.244.0.3:38593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002977s
	I0610 12:32:08.715991    8536 command_runner.go:130] > [INFO] 10.244.0.3:38526 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137201s
	I0610 12:32:08.715991    8536 command_runner.go:130] > [INFO] 10.244.0.3:48445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001467s
	I0610 12:32:08.715991    8536 command_runner.go:130] > [INFO] 10.244.0.3:47462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000731s
	I0610 12:32:08.716052    8536 command_runner.go:130] > [INFO] 10.244.0.3:58225 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196101s
	I0610 12:32:08.716052    8536 command_runner.go:130] > [INFO] 10.244.1.2:35924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001833s
	I0610 12:32:08.716052    8536 command_runner.go:130] > [INFO] 10.244.1.2:51712 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001386s
	I0610 12:32:08.716129    8536 command_runner.go:130] > [INFO] 10.244.1.2:37161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007s
	I0610 12:32:08.716129    8536 command_runner.go:130] > [INFO] 10.244.1.2:37141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141s
	I0610 12:32:08.716129    8536 command_runner.go:130] > [INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001227s
	I0610 12:32:08.716190    8536 command_runner.go:130] > [INFO] 10.244.0.3:56133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247001s
	I0610 12:32:08.716190    8536 command_runner.go:130] > [INFO] 10.244.0.3:48451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000604s
	I0610 12:32:08.716256    8536 command_runner.go:130] > [INFO] 10.244.0.3:38368 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001264s
	I0610 12:32:08.716256    8536 command_runner.go:130] > [INFO] 10.244.1.2:44129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001056s
	I0610 12:32:08.716316    8536 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001955s
	I0610 12:32:08.716316    8536 command_runner.go:130] > [INFO] 10.244.1.2:59467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001589s
	I0610 12:32:08.716395    8536 command_runner.go:130] > [INFO] 10.244.1.2:53581 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002131s
	I0610 12:32:08.716395    8536 command_runner.go:130] > [INFO] 10.244.0.3:41745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	I0610 12:32:08.716444    8536 command_runner.go:130] > [INFO] 10.244.0.3:53512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001784s
	I0610 12:32:08.716528    8536 command_runner.go:130] > [INFO] 10.244.0.3:56441 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001208s
	I0610 12:32:08.716568    8536 command_runner.go:130] > [INFO] 10.244.0.3:55640 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001199s
	I0610 12:32:08.716568    8536 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0610 12:32:08.716617    8536 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
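
The CoreDNS entries above use the log plugin's line format: client ip:port, query id, a quoted section with the query type, class, name, protocol, size, DO bit, and buffer size, then the rcode, response flags, response size, and duration. A small sketch that splits one of these lines into those fields; the field order is taken from the plugin's documented default format string and should be treated as an assumption:

package main

import (
	"fmt"
	"strings"
)

// parseQueryLog splits one CoreDNS log-plugin line into its fields.
// Assumed layout: {type} {class} {name} {proto} {size} {do} {bufsize}
// inside the quotes; rcode, flags, response size, duration after them.
func parseQueryLog(line string) (map[string]string, error) {
	start := strings.Index(line, "\"")
	end := strings.LastIndex(line, "\"")
	if start < 0 || end <= start {
		return nil, fmt.Errorf("no quoted query section in %q", line)
	}
	q := strings.Fields(line[start+1 : end]) // TYPE CLASS NAME PROTO SIZE DO BUFSIZE
	rest := strings.Fields(line[end+1:])     // RCODE FLAGS RSIZE DURATION
	if len(q) < 7 || len(rest) < 4 {
		return nil, fmt.Errorf("unexpected field count in %q", line)
	}
	return map[string]string{
		"type": q[0], "class": q[1], "name": q[2], "proto": q[3],
		"rcode": rest[0], "flags": rest[1], "duration": rest[3],
	}, nil
}

func main() {
	fields, err := parseQueryLog(`[INFO] 10.244.1.2:36040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003047s`)
	fmt.Println(fields, err)
}
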
	I0610 12:32:08.719098    8536 logs.go:123] Gathering logs for kube-controller-manager [f1409bf44ff1] ...
	I0610 12:32:08.719685    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1409bf44ff1"
	I0610 12:32:08.762120    8536 command_runner.go:130] ! I0610 12:07:55.502430       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:08.762559    8536 command_runner.go:130] ! I0610 12:07:56.114557       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:08.762619    8536 command_runner.go:130] ! I0610 12:07:56.114858       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:08.762687    8536 command_runner.go:130] ! I0610 12:07:56.117078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:08.762687    8536 command_runner.go:130] ! I0610 12:07:56.117365       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:08.762793    8536 command_runner.go:130] ! I0610 12:07:56.118392       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:08.762793    8536 command_runner.go:130] ! I0610 12:07:56.118623       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:08.762865    8536 command_runner.go:130] ! I0610 12:08:00.413505       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:08.762865    8536 command_runner.go:130] ! I0610 12:08:00.413532       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:08.762932    8536 command_runner.go:130] ! I0610 12:08:00.454038       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:08.762932    8536 command_runner.go:130] ! I0610 12:08:00.454303       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:08.763041    8536 command_runner.go:130] ! I0610 12:08:00.454341       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:08.763041    8536 command_runner.go:130] ! I0610 12:08:00.474947       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:08.763113    8536 command_runner.go:130] ! I0610 12:08:00.475105       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:08.763113    8536 command_runner.go:130] ! I0610 12:08:00.475116       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:08.763113    8536 command_runner.go:130] ! I0610 12:08:00.514703       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:08.763172    8536 command_runner.go:130] ! I0610 12:08:10.509914       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.510020       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.511115       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.511148       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.515475       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.515547       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.516308       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.516334       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.516340       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.531416       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.531434       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.531293       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.543960       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.544630       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.544667       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.567000       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.567602       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.568240       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.586627       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.587637       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.587654       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.623685       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.623975       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.624342       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.639985       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.640617       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.640810       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.702326       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.706246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:08.763230    8536 command_runner.go:130] ! I0610 12:08:10.711937       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:08.763776    8536 command_runner.go:130] ! I0610 12:08:10.712131       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:08.763776    8536 command_runner.go:130] ! I0610 12:08:10.712146       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:08.763840    8536 command_runner.go:130] ! I0610 12:08:10.712235       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:08.763840    8536 command_runner.go:130] ! I0610 12:08:10.712265       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:08.763917    8536 command_runner.go:130] ! I0610 12:08:10.724980       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:08.763971    8536 command_runner.go:130] ! I0610 12:08:10.726393       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:08.764020    8536 command_runner.go:130] ! I0610 12:08:10.726653       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:08.764086    8536 command_runner.go:130] ! I0610 12:08:10.742390       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:08.764134    8536 command_runner.go:130] ! I0610 12:08:10.743099       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:08.764195    8536 command_runner.go:130] ! I0610 12:08:10.744498       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:08.764195    8536 command_runner.go:130] ! I0610 12:08:10.759177       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:08.764262    8536 command_runner.go:130] ! I0610 12:08:10.759262       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:08.764262    8536 command_runner.go:130] ! I0610 12:08:10.759917       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:08.764323    8536 command_runner.go:130] ! I0610 12:08:10.759932       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:08.764389    8536 command_runner.go:130] ! I0610 12:08:10.901245       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:08.764451    8536 command_runner.go:130] ! I0610 12:08:10.903470       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:08.764451    8536 command_runner.go:130] ! I0610 12:08:10.903502       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:08.764516    8536 command_runner.go:130] ! I0610 12:08:11.064066       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:08.764576    8536 command_runner.go:130] ! I0610 12:08:11.064123       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:08.764576    8536 command_runner.go:130] ! I0610 12:08:11.064135       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:08.764642    8536 command_runner.go:130] ! I0610 12:08:11.202164       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:08.764705    8536 command_runner.go:130] ! I0610 12:08:11.202227       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:08.764769    8536 command_runner.go:130] ! I0610 12:08:11.202239       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:08.764834    8536 command_runner.go:130] ! I0610 12:08:11.352380       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:08.764892    8536 command_runner.go:130] ! I0610 12:08:11.352546       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:08.764941    8536 command_runner.go:130] ! I0610 12:08:11.352575       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:08.765000    8536 command_runner.go:130] ! I0610 12:08:11.656918       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:08.765000    8536 command_runner.go:130] ! I0610 12:08:11.657560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:08.765070    8536 command_runner.go:130] ! I0610 12:08:11.657950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:08.765153    8536 command_runner.go:130] ! I0610 12:08:11.658269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:08.765153    8536 command_runner.go:130] ! I0610 12:08:11.658437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:08.765262    8536 command_runner.go:130] ! I0610 12:08:11.658699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:08.765345    8536 command_runner.go:130] ! I0610 12:08:11.658785       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:08.765345    8536 command_runner.go:130] ! I0610 12:08:11.658822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:08.765405    8536 command_runner.go:130] ! I0610 12:08:11.658849       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:08.765530    8536 command_runner.go:130] ! I0610 12:08:11.658870       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:08.765530    8536 command_runner.go:130] ! I0610 12:08:11.658895       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:08.765610    8536 command_runner.go:130] ! I0610 12:08:11.658915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:08.765666    8536 command_runner.go:130] ! I0610 12:08:11.658950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:08.765800    8536 command_runner.go:130] ! I0610 12:08:11.658987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:08.765855    8536 command_runner.go:130] ! I0610 12:08:11.659004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:08.765855    8536 command_runner.go:130] ! I0610 12:08:11.659056       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:08.765938    8536 command_runner.go:130] ! W0610 12:08:11.659073       1 shared_informer.go:597] resyncPeriod 13h6m28.341601393s is smaller than resyncCheckPeriod 19h0m49.916968618s and the informer has already started. Changing it to 19h0m49.916968618s
	I0610 12:32:08.766005    8536 command_runner.go:130] ! I0610 12:08:11.659195       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:08.766094    8536 command_runner.go:130] ! I0610 12:08:11.659214       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:08.766153    8536 command_runner.go:130] ! I0610 12:08:11.659236       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:08.766206    8536 command_runner.go:130] ! I0610 12:08:11.659287       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:08.766255    8536 command_runner.go:130] ! I0610 12:08:11.659312       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:08.766315    8536 command_runner.go:130] ! I0610 12:08:11.659579       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:08.766359    8536 command_runner.go:130] ! I0610 12:08:11.659591       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:08.766396    8536 command_runner.go:130] ! I0610 12:08:11.659608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:08.766396    8536 command_runner.go:130] ! I0610 12:08:11.895313       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:08.766462    8536 command_runner.go:130] ! I0610 12:08:11.895383       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:08.766462    8536 command_runner.go:130] ! I0610 12:08:11.895693       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:08.766552    8536 command_runner.go:130] ! I0610 12:08:11.896490       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:08.766620    8536 command_runner.go:130] ! I0610 12:08:12.154521       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:08.766668    8536 command_runner.go:130] ! I0610 12:08:12.154576       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:08.766729    8536 command_runner.go:130] ! I0610 12:08:12.154658       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:08.766729    8536 command_runner.go:130] ! I0610 12:08:12.154690       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:08.766806    8536 command_runner.go:130] ! I0610 12:08:12.301351       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:08.766854    8536 command_runner.go:130] ! I0610 12:08:12.301495       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:08.766854    8536 command_runner.go:130] ! I0610 12:08:12.301508       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:08.766920    8536 command_runner.go:130] ! I0610 12:08:12.495309       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:08.766995    8536 command_runner.go:130] ! I0610 12:08:12.495425       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:08.766995    8536 command_runner.go:130] ! I0610 12:08:12.495645       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:08.766995    8536 command_runner.go:130] ! I0610 12:08:12.495683       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:08.767070    8536 command_runner.go:130] ! E0610 12:08:12.550245       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:08.767124    8536 command_runner.go:130] ! I0610 12:08:12.550671       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:08.767202    8536 command_runner.go:130] ! E0610 12:08:12.700493       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:08.767267    8536 command_runner.go:130] ! I0610 12:08:12.700528       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:08.767368    8536 command_runner.go:130] ! I0610 12:08:12.700538       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:08.767426    8536 command_runner.go:130] ! I0610 12:08:12.859280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:08.767426    8536 command_runner.go:130] ! I0610 12:08:12.859580       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:08.767490    8536 command_runner.go:130] ! I0610 12:08:12.859953       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:08.767547    8536 command_runner.go:130] ! I0610 12:08:12.906626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:08.767547    8536 command_runner.go:130] ! I0610 12:08:12.907724       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:08.767547    8536 command_runner.go:130] ! I0610 12:08:13.050431       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:08.767611    8536 command_runner.go:130] ! I0610 12:08:13.050510       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:08.767687    8536 command_runner.go:130] ! I0610 12:08:13.205885       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:08.767745    8536 command_runner.go:130] ! I0610 12:08:13.205970       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.205982       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.351713       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.351815       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.351830       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.603420       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.603498       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.603510       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.750262       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.750789       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.750809       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.900118       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.900639       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:13.900897       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.054008       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.054156       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.054170       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.199527       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.199627       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.199683       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.199694       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.351474       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.352193       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:08.767801    8536 command_runner.go:130] ! I0610 12:08:14.352213       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:08.768444    8536 command_runner.go:130] ! I0610 12:08:14.502148       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:08.768523    8536 command_runner.go:130] ! I0610 12:08:14.502250       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:08.768523    8536 command_runner.go:130] ! I0610 12:08:14.502262       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:08.768523    8536 command_runner.go:130] ! I0610 12:08:14.502696       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:08.768624    8536 command_runner.go:130] ! I0610 12:08:14.502825       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:08.768678    8536 command_runner.go:130] ! I0610 12:08:14.546684       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:08.768728    8536 command_runner.go:130] ! I0610 12:08:14.547077       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:08.768766    8536 command_runner.go:130] ! I0610 12:08:14.547608       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:08.768813    8536 command_runner.go:130] ! I0610 12:08:14.547097       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:08.768897    8536 command_runner.go:130] ! I0610 12:08:14.547127       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:08.768897    8536 command_runner.go:130] ! I0610 12:08:14.547931       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:08.768960    8536 command_runner.go:130] ! I0610 12:08:14.547138       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:08.769019    8536 command_runner.go:130] ! I0610 12:08:14.547188       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:08.769019    8536 command_runner.go:130] ! I0610 12:08:14.548434       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:08.769084    8536 command_runner.go:130] ! I0610 12:08:14.547199       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:08.769142    8536 command_runner.go:130] ! I0610 12:08:14.547257       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:08.769223    8536 command_runner.go:130] ! I0610 12:08:14.548692       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:08.769223    8536 command_runner.go:130] ! I0610 12:08:14.547265       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:08.769284    8536 command_runner.go:130] ! I0610 12:08:14.558628       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:08.769351    8536 command_runner.go:130] ! I0610 12:08:14.590023       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:08.769409    8536 command_runner.go:130] ! I0610 12:08:14.600506       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:08.769475    8536 command_runner.go:130] ! I0610 12:08:14.602694       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:08.769535    8536 command_runner.go:130] ! I0610 12:08:14.603324       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:08.769535    8536 command_runner.go:130] ! I0610 12:08:14.609611       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:08.769535    8536 command_runner.go:130] ! I0610 12:08:14.612038       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:08.769602    8536 command_runner.go:130] ! I0610 12:08:14.623629       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:08.769602    8536 command_runner.go:130] ! I0610 12:08:14.624495       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:08.769674    8536 command_runner.go:130] ! I0610 12:08:14.612329       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:08.769674    8536 command_runner.go:130] ! I0610 12:08:14.628289       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:08.769737    8536 command_runner.go:130] ! I0610 12:08:14.630516       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:08.769800    8536 command_runner.go:130] ! I0610 12:08:14.630648       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:08.769851    8536 command_runner.go:130] ! I0610 12:08:14.622860       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:08.769937    8536 command_runner.go:130] ! I0610 12:08:14.627541       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:08.769984    8536 command_runner.go:130] ! I0610 12:08:14.627554       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:08.770044    8536 command_runner.go:130] ! I0610 12:08:14.627562       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:08.770044    8536 command_runner.go:130] ! I0610 12:08:14.627813       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:08.770111    8536 command_runner.go:130] ! I0610 12:08:14.631141       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:08.770172    8536 command_runner.go:130] ! I0610 12:08:14.631364       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:08.770236    8536 command_runner.go:130] ! I0610 12:08:14.631669       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:08.770236    8536 command_runner.go:130] ! I0610 12:08:14.631834       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:08.770301    8536 command_runner.go:130] ! I0610 12:08:14.642451       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:08.770365    8536 command_runner.go:130] ! I0610 12:08:14.644828       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:08.770365    8536 command_runner.go:130] ! I0610 12:08:14.645380       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:08.770425    8536 command_runner.go:130] ! I0610 12:08:14.647678       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:08.770492    8536 command_runner.go:130] ! I0610 12:08:14.648798       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:08.770552    8536 command_runner.go:130] ! I0610 12:08:14.648809       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:08.770552    8536 command_runner.go:130] ! I0610 12:08:14.648848       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:08.770618    8536 command_runner.go:130] ! I0610 12:08:14.656075       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:08.770618    8536 command_runner.go:130] ! I0610 12:08:14.656781       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:08.770678    8536 command_runner.go:130] ! I0610 12:08:14.657449       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:08.770678    8536 command_runner.go:130] ! I0610 12:08:14.657643       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:08.770741    8536 command_runner.go:130] ! I0610 12:08:14.658125       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:08.770741    8536 command_runner.go:130] ! I0610 12:08:14.661079       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:08.770800    8536 command_runner.go:130] ! I0610 12:08:14.668926       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:08.770800    8536 command_runner.go:130] ! I0610 12:08:14.675620       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:08.770854    8536 command_runner.go:130] ! I0610 12:08:14.680953       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300" podCIDRs=["10.244.0.0/24"]
	I0610 12:32:08.770893    8536 command_runner.go:130] ! I0610 12:08:14.687842       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:08.770954    8536 command_runner.go:130] ! I0610 12:08:14.751377       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:08.770954    8536 command_runner.go:130] ! I0610 12:08:14.754827       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:08.771012    8536 command_runner.go:130] ! I0610 12:08:14.795731       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:08.771067    8536 command_runner.go:130] ! I0610 12:08:14.803976       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:08.771067    8536 command_runner.go:130] ! I0610 12:08:14.807376       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:08.771122    8536 command_runner.go:130] ! I0610 12:08:14.807800       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:08.771185    8536 command_runner.go:130] ! I0610 12:08:14.851108       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:08.771245    8536 command_runner.go:130] ! I0610 12:08:14.858915       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:08.771245    8536 command_runner.go:130] ! I0610 12:08:14.859692       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:14.864873       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:15.295934       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:15.296041       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:15.332772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:15.887603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="329.520484ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.024148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.478301ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.151441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.784808ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.151859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="288.402µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.577624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.03545ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.593339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.556101ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:16.593508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.3µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:30.535681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:30.566310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.4µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:32.538906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="180.301µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:32.610537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.137489ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:32.611020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:08:34.635560       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:11:28.859639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:11:28.879298       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m02" podCIDRs=["10.244.1.0/24"]
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:11:29.670639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:11:51.574110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:12:19.785464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.490556ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:12:19.804051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.524284ms"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:12:19.806222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:12:19.813010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.401µs"
	I0610 12:32:08.771355    8536 command_runner.go:130] ! I0610 12:12:19.818841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.9µs"
	I0610 12:32:08.771915    8536 command_runner.go:130] ! I0610 12:12:22.803157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.023114ms"
	I0610 12:32:08.771979    8536 command_runner.go:130] ! I0610 12:12:22.803959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.7µs"
	I0610 12:32:08.771979    8536 command_runner.go:130] ! I0610 12:12:23.117968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.704624ms"
	I0610 12:32:08.771979    8536 command_runner.go:130] ! I0610 12:12:23.118507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.5µs"
	I0610 12:32:08.772089    8536 command_runner.go:130] ! I0610 12:25:52.678571       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:25:52.681612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:25:52.698797       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m03" podCIDRs=["10.244.2.0/24"]
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:25:54.878967       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:26:13.380155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:27:44.944679       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:28:15.516170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.644756ms"
	I0610 12:32:08.772143    8536 command_runner.go:130] ! I0610 12:28:15.516815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.1µs"
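The kube-controller-manager entries above record the node-ipam-controller handing each joining node a /24 PodCIDR out of the cluster range: 10.244.0.0/24 for multinode-813300, 10.244.1.0/24 for -m02, 10.244.2.0/24 for -m03. A minimal Go sketch of that arithmetic, assuming a 10.244.0.0/16 cluster CIDR and sequential allocation; this is an illustration of what the "Set node PodCIDR" lines describe, not the controller's actual allocator:

    // podcidr_sketch.go — illustrative only; NOT kube-controller-manager code.
    // Carves consecutive /24 PodCIDRs out of a /16 cluster CIDR, matching the
    // "Set node PodCIDR" entries above.
    package main

    import (
    	"fmt"
    	"net"
    )

    // nthSubnet returns the i-th /24 inside base (assumed to be a /16).
    func nthSubnet(base *net.IPNet, i int) *net.IPNet {
    	ip := base.IP.To4()
    	return &net.IPNet{
    		IP:   net.IPv4(ip[0], ip[1], byte(i), 0),
    		Mask: net.CIDRMask(24, 32),
    	}
    }

    func main() {
    	_, clusterCIDR, _ := net.ParseCIDR("10.244.0.0/16")
    	nodes := []string{"multinode-813300", "multinode-813300-m02", "multinode-813300-m03"}
    	for i, node := range nodes {
    		fmt.Printf("Set node PodCIDR node=%s podCIDRs=[%s]\n", node, nthSubnet(clusterCIDR, i))
    	}
    }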
	I0610 12:32:08.790122    8536 logs.go:123] Gathering logs for kindnet [c39d54960e7d] ...
	I0610 12:32:08.790122    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39d54960e7d"
	I0610 12:32:08.834300    8536 command_runner.go:130] ! I0610 12:12:45.866152       1 main.go:227] handling current node
	I0610 12:32:08.835323    8536 command_runner.go:130] ! I0610 12:12:45.866170       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.835323    8536 command_runner.go:130] ! I0610 12:12:45.866178       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.835389    8536 command_runner.go:130] ! I0610 12:12:55.883210       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.835389    8536 command_runner.go:130] ! I0610 12:12:55.883426       1 main.go:227] handling current node
	I0610 12:32:08.835389    8536 command_runner.go:130] ! I0610 12:12:55.883562       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.835389    8536 command_runner.go:130] ! I0610 12:12:55.883686       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
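The cycle just above (12:12:45 and 12:12:55) is one kindnet reconcile pass: for each node in the cluster the daemon logs its IPs, skips the node it is running on ("handling current node"), and otherwise notes the peer's PodCIDR so the corresponding route can be kept in place. A hypothetical Go sketch of that loop shape — node names, IPs, and the ~10s tick are taken from the log; kindnet's real main.go differs in detail:

    // Illustrative sketch of the repeating reconcile loop; not kindnet's code.
    package main

    import (
    	"fmt"
    	"time"
    )

    type node struct {
    	name    string
    	ip      string
    	podCIDR string
    }

    func main() {
    	current := "multinode-813300"
    	nodes := []node{
    		{"multinode-813300", "172.17.159.171", "10.244.0.0/24"},
    		{"multinode-813300-m02", "172.17.151.128", "10.244.1.0/24"},
    	}
    	for range time.Tick(10 * time.Second) { // resync roughly every 10s, as the timestamps show
    		for _, n := range nodes {
    			fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
    			if n.name == current {
    				fmt.Println("handling current node") // nothing to route locally
    				continue
    			}
    			fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.podCIDR)
    			// here the real daemon would ensure a route to n.podCIDR via n.ip
    		}
    	}
    }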
	[... 308 near-identical lines: kindnet repeats the same two-node cycle every ~10 seconds from 12:13:05 through 12:25:46, each pass logging both nodes and multinode-813300-m02's CIDR 10.244.1.0/24 ...]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.830374       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.830439       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.830471       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.830478       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.831222       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.831411       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:25:56.831494       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
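The "Adding route" entry above is kindnet programming a host route to the new node's PodCIDR (Dst 10.244.2.0/24) via that node's address (Gw 172.17.144.46); the braces are the String form of a netlink Route. A minimal Linux-only sketch of the same call, assuming the github.com/vishvananda/netlink package — illustrative rather than kindnet's exact code, and it needs root to run:

    // route_sketch.go — minimal sketch of the route in the log line above.
    package main

    import (
    	"log"
    	"net"

    	"github.com/vishvananda/netlink"
    )

    func main() {
    	_, dst, err := net.ParseCIDR("10.244.2.0/24") // pod CIDR of multinode-813300-m03
    	if err != nil {
    		log.Fatal(err)
    	}
    	route := &netlink.Route{
    		Dst: dst,
    		Gw:  net.ParseIP("172.17.144.46"), // that node's primary IP
    	}
    	// RouteReplace is idempotent, so a periodic reconcile loop can call it safely.
    	if err := netlink.RouteReplace(route); err != nil {
    		log.Fatal(err)
    	}
    }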
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.840295       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.840446       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.840464       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.840913       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.845129       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:06.845329       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.860365       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.860476       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.860493       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.860502       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.861223       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:16.861379       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:26.873719       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:26.873964       1 main.go:227] handling current node
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:26.874016       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.843518    8536 command_runner.go:130] ! I0610 12:26:26.874181       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:26.874413       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:26.874451       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881254       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881366       1 main.go:227] handling current node
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881382       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881407       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881814       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:36.881908       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:46.900700       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844127    8536 command_runner.go:130] ! I0610 12:26:46.900797       1 main.go:227] handling current node
	I0610 12:32:08.844245    8536 command_runner.go:130] ! I0610 12:26:46.900815       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844245    8536 command_runner.go:130] ! I0610 12:26:46.900823       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844245    8536 command_runner.go:130] ! I0610 12:26:46.900956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:46.900985       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907395       1 main.go:227] handling current node
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907412       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907420       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907548       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:26:56.907656       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922305       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922349       1 main.go:227] handling current node
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922361       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922367       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922490       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:06.922515       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.929579       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.929687       1 main.go:227] handling current node
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.929704       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.929712       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.930550       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:16.930641       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.944603       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.944719       1 main.go:227] handling current node
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.944772       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.945138       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.945535       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:26.945625       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:36.955188       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:36.955329       1 main.go:227] handling current node
	I0610 12:32:08.844430    8536 command_runner.go:130] ! I0610 12:27:36.955462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.844977    8536 command_runner.go:130] ! I0610 12:27:36.955581       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.844977    8536 command_runner.go:130] ! I0610 12:27:36.955956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.845205    8536 command_runner.go:130] ! I0610 12:27:36.956158       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.845371    8536 command_runner.go:130] ! I0610 12:27:46.965590       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:46.965717       1 main.go:227] handling current node
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:46.965826       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:46.965836       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:46.966598       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:46.966708       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:56.999276       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:56.999553       1 main.go:227] handling current node
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:56.999711       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:56.999728       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:57.000088       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:27:57.000177       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015069       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015281       1 main.go:227] handling current node
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015300       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015308       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015707       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:08.845519    8536 command_runner.go:130] ! I0610 12:28:07.015928       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
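The kindnet log above is a steady 10-second reconcile loop: on each pass the daemon lists every known node, marks the local one ("handling current node"), and records each peer's PodCIDR; when multinode-813300-m03 joins at 12:25:56 it additionally installs a route to that node's pod subnet via the node IP (the routes.go:62 "Adding route" line). A minimal sketch of that route-install step, assuming the github.com/vishvananda/netlink package (the real kindnet code may differ):

package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

// addPodCIDRRoute has the effect of `ip route add <podCIDR> via <nodeIP>`,
// i.e. the "Adding route {... Dst: 10.244.2.0/24 ... Gw: 172.17.144.46 ...}"
// entry logged above. Requires root and a Linux netlink socket.
func addPodCIDRRoute(podCIDR, nodeIP string) error {
	_, dst, err := net.ParseCIDR(podCIDR) // e.g. "10.244.2.0/24"
	if err != nil {
		return err
	}
	return netlink.RouteAdd(&netlink.Route{
		Dst: dst,                 // peer node's pod subnet
		Gw:  net.ParseIP(nodeIP), // peer node's InternalIP
	})
}

func main() {
	if err := addPodCIDRRoute("10.244.2.0/24", "172.17.144.46"); err != nil {
		log.Fatal(err)
	}
}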
	I0610 12:32:08.865123    8536 logs.go:123] Gathering logs for describe nodes ...
	I0610 12:32:08.866108    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 12:32:09.099969    8536 command_runner.go:130] > Name:               multinode-813300
	I0610 12:32:09.099969    8536 command_runner.go:130] > Roles:              control-plane
	I0610 12:32:09.100040    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:09.100040    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:09.100040    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:09.100040    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300
	I0610 12:32:09.100081    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:09.100081    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:09.100118    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:09.100118    8536 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0610 12:32:09.100118    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700
	I0610 12:32:09.100118    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:09.100118    8536 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0610 12:32:09.100184    8536 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0610 12:32:09.100184    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:09.100184    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:09.100184    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:09.100233    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:07:57 +0000
	I0610 12:32:09.100277    8536 command_runner.go:130] > Taints:             <none>
	I0610 12:32:09.100277    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:09.100277    8536 command_runner.go:130] > Lease:
	I0610 12:32:09.100277    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300
	I0610 12:32:09.100277    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:09.100323    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:32:00 +0000
	I0610 12:32:09.100323    8536 command_runner.go:130] > Conditions:
	I0610 12:32:09.100361    8536 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0610 12:32:09.100402    8536 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0610 12:32:09.100402    8536 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0610 12:32:09.100402    8536 command_runner.go:130] >   DiskPressure     False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0610 12:32:09.100402    8536 command_runner.go:130] >   PIDPressure      False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0610 12:32:09.100402    8536 command_runner.go:130] >   Ready            True    Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:31:40 +0000   KubeletReady                 kubelet is posting ready status
	I0610 12:32:09.100402    8536 command_runner.go:130] > Addresses:
	I0610 12:32:09.100402    8536 command_runner.go:130] >   InternalIP:  172.17.150.144
	I0610 12:32:09.100402    8536 command_runner.go:130] >   Hostname:    multinode-813300
	I0610 12:32:09.100402    8536 command_runner.go:130] > Capacity:
	I0610 12:32:09.100402    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.100402    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.100550    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.100550    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.100550    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.100550    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:09.100550    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.100550    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.100550    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.100550    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.100550    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.100550    8536 command_runner.go:130] > System Info:
	I0610 12:32:09.100550    8536 command_runner.go:130] >   Machine ID:                 8363a852b0fa420a8dccb009e6f4f9c7
	I0610 12:32:09.100550    8536 command_runner.go:130] >   System UUID:                5734c1ff-f59b-f647-9c36-fb6d9a8cd541
	I0610 12:32:09.100678    8536 command_runner.go:130] >   Boot ID:                    a60b688f-6b78-4fa5-b21e-96a64e5c1047
	I0610 12:32:09.100678    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:09.100719    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:09.100719    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:09.100719    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:09.100761    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:09.100761    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:09.100796    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:09.100810    8536 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0610 12:32:09.100835    8536 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0610 12:32:09.100864    8536 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0610 12:32:09.100864    8536 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:09.100915    8536 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0610 12:32:09.100957    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-z28tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-kbhvv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 etcd-multinode-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         69s
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 kindnet-29gbv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 kube-proxy-nrpvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:09.100957    8536 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0610 12:32:09.100957    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:09.100957    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Resource           Requests     Limits
	I0610 12:32:09.100957    8536 command_runner.go:130] >   --------           --------     ------
	I0610 12:32:09.100957    8536 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0610 12:32:09.100957    8536 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0610 12:32:09.100957    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0610 12:32:09.100957    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0610 12:32:09.100957    8536 command_runner.go:130] > Events:
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:09.100957    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-813300 status is now: NodeReady
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:09.100957    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:09.101544    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:09.101544    8536 command_runner.go:130] >   Normal  RegisteredNode           57s                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:09.101591    8536 command_runner.go:130] > Name:               multinode-813300-m02
	I0610 12:32:09.101591    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:09.101591    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m02
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:09.101591    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:09.101696    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:09.101739    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700
	I0610 12:32:09.101739    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:09.101739    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:09.101739    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:09.101792    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:09.101792    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:11:28 +0000
	I0610 12:32:09.101826    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:09.101826    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:09.101857    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:09.101857    8536 command_runner.go:130] > Lease:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m02
	I0610 12:32:09.101857    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:09.101857    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:30 +0000
	I0610 12:32:09.101857    8536 command_runner.go:130] > Conditions:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:09.101857    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:09.101857    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.101857    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.101857    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.101857    8536 command_runner.go:130] > Addresses:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   InternalIP:  172.17.151.128
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Hostname:    multinode-813300-m02
	I0610 12:32:09.101857    8536 command_runner.go:130] > Capacity:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.101857    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.101857    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.101857    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.101857    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.101857    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.101857    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.101857    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.101857    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.101857    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.101857    8536 command_runner.go:130] > System Info:
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Machine ID:                 0d46b791e8a04ff7a071c88405a5a4eb
	I0610 12:32:09.101857    8536 command_runner.go:130] >   System UUID:                e053fc34-e8e5-6649-afc7-f62c0d458753
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Boot ID:                    a3528c50-da8b-4321-8198-65ea5eca732a
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:09.101857    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:09.101857    8536 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0610 12:32:09.101857    8536 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0610 12:32:09.101857    8536 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0610 12:32:09.101857    8536 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:09.101857    8536 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0610 12:32:09.101857    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-czxmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:09.101857    8536 command_runner.go:130] >   kube-system                 kindnet-r4nfq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0610 12:32:09.101857    8536 command_runner.go:130] >   kube-system                 kube-proxy-rx2b2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0610 12:32:09.102436    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:09.102436    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:09.102436    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:09.102436    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:09.102503    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:09.102503    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:09.102503    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:09.102503    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:09.102503    8536 command_runner.go:130] > Events:
	I0610 12:32:09.102503    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:09.102503    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:09.102503    8536 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0610 12:32:09.102628    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientMemory
	I0610 12:32:09.102628    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasNoDiskPressure
	I0610 12:32:09.102689    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientPID
	I0610 12:32:09.102689    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:09.102689    8536 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:09.102745    8536 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-813300-m02 status is now: NodeReady
	I0610 12:32:09.102812    8536 command_runner.go:130] >   Normal  NodeNotReady             3m54s              node-controller  Node multinode-813300-m02 status is now: NodeNotReady
	I0610 12:32:09.102812    8536 command_runner.go:130] >   Normal  RegisteredNode           57s                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:09.102812    8536 command_runner.go:130] > Name:               multinode-813300-m03
	I0610 12:32:09.102812    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:09.102812    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m03
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_25_53_0700
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:09.102812    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:09.102812    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:25:52 +0000
	I0610 12:32:09.102812    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:09.102812    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:09.102812    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:09.102812    8536 command_runner.go:130] > Lease:
	I0610 12:32:09.102812    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m03
	I0610 12:32:09.102812    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:09.102812    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:04 +0000
	I0610 12:32:09.102812    8536 command_runner.go:130] > Conditions:
	I0610 12:32:09.102812    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:09.103889    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:09.103923    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.103970    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.104005    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.104005    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:09.104005    8536 command_runner.go:130] > Addresses:
	I0610 12:32:09.104005    8536 command_runner.go:130] >   InternalIP:  172.17.144.46
	I0610 12:32:09.104048    8536 command_runner.go:130] >   Hostname:    multinode-813300-m03
	I0610 12:32:09.104048    8536 command_runner.go:130] > Capacity:
	I0610 12:32:09.104048    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.104048    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.104083    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.104083    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.104083    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.104083    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:09.104083    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:09.104083    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:09.104083    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:09.104135    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:09.104135    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:09.104135    8536 command_runner.go:130] > System Info:
	I0610 12:32:09.104135    8536 command_runner.go:130] >   Machine ID:                 2d60e1f6e3b2454db505a650eae61212
	I0610 12:32:09.104169    8536 command_runner.go:130] >   System UUID:                b38b4a9a-39f6-6f43-9e6d-19433dc62cd9
	I0610 12:32:09.104169    8536 command_runner.go:130] >   Boot ID:                    0a419483-5289-4d17-96c2-fd4487360412
	I0610 12:32:09.104169    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:09.104210    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:09.104210    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:09.104246    8536 command_runner.go:130] > PodCIDR:                      10.244.2.0/24
	I0610 12:32:09.104246    8536 command_runner.go:130] > PodCIDRs:                     10.244.2.0/24
	I0610 12:32:09.104246    8536 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:09.104246    8536 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0610 12:32:09.104246    8536 command_runner.go:130] >   kube-system                 kindnet-2pc4j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	I0610 12:32:09.104246    8536 command_runner.go:130] >   kube-system                 kube-proxy-vw56h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	I0610 12:32:09.104246    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:09.104246    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:09.104246    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:09.104246    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:09.104246    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:09.104246    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:09.104246    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:09.104246    8536 command_runner.go:130] > Events:
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0610 12:32:09.104246    8536 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Normal  Starting                 6m4s                   kube-proxy       
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m17s (x2 over 6m17s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientMemory
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m17s (x2 over 6m17s)  kubelet          Node multinode-813300-m03 status is now: NodeHasNoDiskPressure
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m17s (x2 over 6m17s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientPID
	I0610 12:32:09.104246    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:09.104799    8536 command_runner.go:130] >   Normal  RegisteredNode           6m15s                  node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
	I0610 12:32:09.104854    8536 command_runner.go:130] >   Normal  NodeReady                5m56s                  kubelet          Node multinode-813300-m03 status is now: NodeReady
	I0610 12:32:09.104854    8536 command_runner.go:130] >   Normal  NodeNotReady             4m25s                  node-controller  Node multinode-813300-m03 status is now: NodeNotReady
	I0610 12:32:09.104922    8536 command_runner.go:130] >   Normal  RegisteredNode           57s                    node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
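The describe output above captures the shape of the failure: multinode-813300 is Ready, while -m02 and -m03 report every condition as Unknown ("Kubelet stopped posting node status") and carry node.kubernetes.io/unreachable taints, so the node-controller has marked them NotReady. A short client-go sketch that prints the same per-node Ready condition (hedged; it assumes the kubeconfig path /var/lib/minikube/kubeconfig that the describe command above is given):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the log gatherer passes to kubectl above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// e.g. "multinode-813300-m02  Ready=Unknown  NodeStatusUnknown"
				fmt.Printf("%s\tReady=%s\t%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}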
	I0610 12:32:09.115578    8536 logs.go:123] Gathering logs for kube-proxy [afad8b05897e] ...
	I0610 12:32:09.115578    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 afad8b05897e"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.787330       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.815813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.159.171"]
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.929231       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.929304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.929325       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.933115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.933534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.933681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.935227       1 config.go:192] "Starting service config controller"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.935260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.935291       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.935297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.937731       1 config.go:319] "Starting node config controller"
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:17.938095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:18.035433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:32:09.145445    8536 command_runner.go:130] ! I0610 12:08:18.035502       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:09.146444    8536 command_runner.go:130] ! I0610 12:08:18.038590       1 shared_informer.go:320] Caches are synced for node config
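The kube-proxy startup sequence above is the standard client-go informer handshake: it builds watch caches for the service, endpoint slice, and node config controllers, logs "Waiting for caches to sync" for each, and only starts proxying once "Caches are synced" fires. A minimal sketch of that pattern (hedged; reuses the assumed kubeconfig path from the previous example, and is not kube-proxy's actual code):

package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Informers for two of the resources kube-proxy watches.
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	services := factory.Core().V1().Services().Informer()
	slices := factory.Discovery().V1().EndpointSlices().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Block until the initial LIST+WATCH has populated both caches,
	// mirroring the "Waiting for caches to sync" / "Caches are synced"
	// pair in the log above.
	if !cache.WaitForCacheSync(stop, services.HasSynced, slices.HasSynced) {
		log.Fatal("caches did not sync")
	}
	log.Println("caches are synced; safe to start proxying")
}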
	I0610 12:32:09.148238    8536 logs.go:123] Gathering logs for kube-controller-manager [3bee53d5fef9] ...
	I0610 12:32:09.148238    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bee53d5fef9"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:56.976566       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.260708       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.260892       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.266101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.267393       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.268203       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:30:58.268377       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.430160       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.430459       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.456745       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.457409       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.457489       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.457839       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.509226       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.512712       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.512947       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.517463       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.520424       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.528150       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.528371       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.528506       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.528651       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.533407       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.543133       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.548293       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.548310       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:09.182318    8536 command_runner.go:130] ! I0610 12:31:01.548473       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:09.183300    8536 command_runner.go:130] ! I0610 12:31:01.548492       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:09.183300    8536 command_runner.go:130] ! I0610 12:31:01.548660       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:09.183353    8536 command_runner.go:130] ! I0610 12:31:01.548672       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:09.183353    8536 command_runner.go:130] ! I0610 12:31:01.595194       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:09.183353    8536 command_runner.go:130] ! I0610 12:31:01.595266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:09.183353    8536 command_runner.go:130] ! I0610 12:31:01.595295       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:09.183430    8536 command_runner.go:130] ! I0610 12:31:01.595320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:09.183479    8536 command_runner.go:130] ! I0610 12:31:01.595340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:09.183527    8536 command_runner.go:130] ! I0610 12:31:01.595360       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595402       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595465       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! W0610 12:31:01.595507       1 shared_informer.go:597] resyncPeriod 13h16m37.278540311s is smaller than resyncCheckPeriod 16h53m16.378760609s and the informer has already started. Changing it to 16h53m16.378760609s
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595706       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595754       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595782       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595923       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.595956       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597416       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597453       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597489       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597516       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597922       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.597937       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.598081       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.614277       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.614469       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.614504       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.618176       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.618586       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.618885       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.623374       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.624235       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.624265       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:09.183589    8536 command_runner.go:130] ! I0610 12:31:01.629921       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:09.184224    8536 command_runner.go:130] ! I0610 12:31:01.630154       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:09.184224    8536 command_runner.go:130] ! I0610 12:31:01.630164       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:09.184293    8536 command_runner.go:130] ! I0610 12:31:01.634130       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:09.184293    8536 command_runner.go:130] ! I0610 12:31:01.634452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:09.184293    8536 command_runner.go:130] ! I0610 12:31:01.634467       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:09.184293    8536 command_runner.go:130] ! I0610 12:31:01.639133       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:09.184385    8536 command_runner.go:130] ! I0610 12:31:01.639154       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:09.184429    8536 command_runner.go:130] ! I0610 12:31:01.639163       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:09.184429    8536 command_runner.go:130] ! I0610 12:31:01.639622       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:09.184429    8536 command_runner.go:130] ! I0610 12:31:01.639640       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:09.184479    8536 command_runner.go:130] ! I0610 12:31:01.643940       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:09.184479    8536 command_runner.go:130] ! I0610 12:31:01.644017       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:09.184531    8536 command_runner.go:130] ! I0610 12:31:01.644031       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:09.184531    8536 command_runner.go:130] ! I0610 12:31:01.652714       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:09.184531    8536 command_runner.go:130] ! I0610 12:31:01.657163       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:09.184579    8536 command_runner.go:130] ! I0610 12:31:01.657350       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:09.184579    8536 command_runner.go:130] ! E0610 12:31:01.664322       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:09.184640    8536 command_runner.go:130] ! I0610 12:31:01.664388       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:09.184698    8536 command_runner.go:130] ! I0610 12:31:01.694061       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:09.184732    8536 command_runner.go:130] ! I0610 12:31:01.694262       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:09.184732    8536 command_runner.go:130] ! I0610 12:31:01.694273       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:09.184782    8536 command_runner.go:130] ! I0610 12:31:01.722911       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:09.184815    8536 command_runner.go:130] ! I0610 12:31:01.725806       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:09.184815    8536 command_runner.go:130] ! I0610 12:31:01.726026       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:09.184844    8536 command_runner.go:130] ! I0610 12:31:01.734788       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:09.184880    8536 command_runner.go:130] ! I0610 12:31:01.735047       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:09.184921    8536 command_runner.go:130] ! I0610 12:31:01.735083       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:09.184921    8536 command_runner.go:130] ! I0610 12:31:01.759990       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:09.184960    8536 command_runner.go:130] ! I0610 12:31:01.761603       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.761772       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.769963       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.773525       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.773866       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.773998       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.778762       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.778803       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.778833       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.779416       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.779429       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.779447       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.780731       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.782261       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.783730       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.782277       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.782337       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.784928       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:01.782348       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.813253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.813374       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.813998       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.815397       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.818405       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.818514       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.819007       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.819350       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.821748       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.821802       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:09.185001    8536 command_runner.go:130] ! I0610 12:31:11.822113       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:09.185631    8536 command_runner.go:130] ! I0610 12:31:11.822204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:09.185631    8536 command_runner.go:130] ! I0610 12:31:11.822232       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:09.185631    8536 command_runner.go:130] ! I0610 12:31:11.826332       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:09.185631    8536 command_runner.go:130] ! I0610 12:31:11.826815       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:09.185631    8536 command_runner.go:130] ! I0610 12:31:11.826831       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:09.185747    8536 command_runner.go:130] ! E0610 12:31:11.830024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.830417       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.835752       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.836296       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.836330       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.839311       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.839512       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.839590       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.842028       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.842220       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.842603       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.842639       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.845940       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.846359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.846982       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.849897       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.850381       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.850613       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.853131       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.853418       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.853675       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.856318       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.856441       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.856643       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.856381       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.902405       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.903166       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.906707       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.907117       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:09.185747    8536 command_runner.go:130] ! I0610 12:31:11.907152       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.910144       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.910388       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.910498       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.913998       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.914276       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.915779       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.916916       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.917975       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.918292       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:09.186359    8536 command_runner.go:130] ! I0610 12:31:11.930523       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:09.186543    8536 command_runner.go:130] ! I0610 12:31:11.947621       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:09.186543    8536 command_runner.go:130] ! I0610 12:31:11.948394       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:09.186631    8536 command_runner.go:130] ! I0610 12:31:11.948768       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:09.186631    8536 command_runner.go:130] ! I0610 12:31:11.954911       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:09.186631    8536 command_runner.go:130] ! I0610 12:31:11.957486       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.963420       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.973610       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.979167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.980674       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.984963       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.985188       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:11.994612       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.003389       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.007898       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.011185       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.013303       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.014815       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.016632       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.016812       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.016947       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.017245       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.017927       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.018270       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.019668       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.019818       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:09.186727    8536 command_runner.go:130] ! I0610 12:31:12.023667       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.024171       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.025888       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.026414       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.026742       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.026899       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.031613       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.035671       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.038980       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.040498       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.044612       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.044983       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.048630       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.048809       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.050934       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.051748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.77596ms"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.058669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.911µs"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.061957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.647762ms"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.062771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="326.05µs"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.074892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.074973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.075004       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.075594       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.130853       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.140823       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.147492       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.174418       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.201305       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:09.187131    8536 command_runner.go:130] ! I0610 12:31:12.218626       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:31:12.243193       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:31:12.658052       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:31:12.658432       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:31:12.674720       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:31:42.085794       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:32:06.626500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.481917ms"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:32:06.626834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.891µs"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:32:06.653330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="217.376µs"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:32:06.704393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.856077ms"
	I0610 12:32:09.187615    8536 command_runner.go:130] ! I0610 12:32:06.705453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.995µs"
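The "Waiting for caches to sync" / "Caches are synced" pairs in the kube-controller-manager log above are client-go's shared-informer machinery at work: each controller blocks its workers until its local informer caches mirror the API server. A minimal sketch of that gating pattern, assuming a stock client-go setup (illustrative only, not minikube's or kube-controller-manager's actual code; the kubeconfig path is an assumption):

	// gating workers on informer cache sync, as the log lines above record
	package main

	import (
		"fmt"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed: kubeconfig at the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(client, 0)
		podInformer := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		// Equivalent of "Waiting for caches to sync ...": block until the
		// local cache has caught up with the API server before starting work.
		if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
			panic("caches failed to sync")
		}
		fmt.Println("caches are synced; workers may start")
	}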
	I0610 12:32:09.204030    8536 logs.go:123] Gathering logs for container status ...
	I0610 12:32:09.204030    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 12:32:09.281479    8536 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0610 12:32:09.282057    8536 command_runner.go:130] > b9550940a81ca       8c811b4aec35f                                                                                         5 seconds ago        Running             busybox                   1                   c4d124cebb3b3       busybox-fc5497c4f-z28tq
	I0610 12:32:09.282057    8536 command_runner.go:130] > 24f3f7e041f98       cbb01a7bd410d                                                                                         5 seconds ago        Running             coredns                   1                   241c4811748fa       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:09.282105    8536 command_runner.go:130] > e934ffe0f9032       6e38f40d628db                                                                                         22 seconds ago       Running             storage-provisioner       2                   2dd9b423841c9       storage-provisioner
	I0610 12:32:09.282105    8536 command_runner.go:130] > c3c4316beca64       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   0c19b39e15f6a       kindnet-29gbv
	I0610 12:32:09.282173    8536 command_runner.go:130] > cc9dbe4aa4005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   2dd9b423841c9       storage-provisioner
	I0610 12:32:09.282214    8536 command_runner.go:130] > 1de5fa0ef8384       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   06d997d7c306c       kube-proxy-nrpvt
	I0610 12:32:09.282214    8536 command_runner.go:130] > d7941126134f2       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   5c3da3b59b527       kube-apiserver-multinode-813300
	I0610 12:32:09.282214    8536 command_runner.go:130] > 877ee07c14997       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   b13c0058ce265       etcd-multinode-813300
	I0610 12:32:09.282214    8536 command_runner.go:130] > d90e72ef46704       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   8902dac03acbc       kube-scheduler-multinode-813300
	I0610 12:32:09.282214    8536 command_runner.go:130] > 3bee53d5fef91       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   f56cc8af37db0       kube-controller-manager-multinode-813300
	I0610 12:32:09.282214    8536 command_runner.go:130] > 91782a06524c6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   9ffef928b2474       busybox-fc5497c4f-z28tq
	I0610 12:32:09.282214    8536 command_runner.go:130] > f2e39052db195       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   a1ae7aed00678       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:09.282214    8536 command_runner.go:130] > c39d54960e7d7       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   689b8976cc029       kindnet-29gbv
	I0610 12:32:09.282214    8536 command_runner.go:130] > afad8b05897e5       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   62db1c721951a       kube-proxy-nrpvt
	I0610 12:32:09.282214    8536 command_runner.go:130] > bd1a6cd987430       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e3b6aa9a0e1d1       kube-scheduler-multinode-813300
	I0610 12:32:09.282214    8536 command_runner.go:130] > f1409bf44ff14       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f04d7b3d4fcc6       kube-controller-manager-multinode-813300
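The container-status listing above is produced by a bash-wrapped fallback command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a; the dmesg and journalctl gathering steps that follow go through the same ssh_runner pattern. A hedged reproduction of that fallback from Go (a sketch of the shell-out, not minikube's actual command_runner implementation):

	// run the same crictl-or-docker fallback the log shows
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The inner backticks and "||" fallback come verbatim from the log:
		// prefer crictl; if it is missing or fails, fall back to docker.
		cmd := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("command failed:", err)
		}
		fmt.Print(string(out))
	}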
	I0610 12:32:09.286325    8536 logs.go:123] Gathering logs for dmesg ...
	I0610 12:32:09.286325    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 12:32:09.313305    8536 command_runner.go:130] > [Jun10 12:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.132459] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.024371] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.082449] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0610 12:32:09.313871    8536 command_runner.go:130] > [  +0.022513] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0610 12:32:09.313871    8536 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +5.764981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +1.334692] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +1.227872] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +7.275008] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0610 12:32:09.314038    8536 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0610 12:32:09.314038    8536 command_runner.go:130] > [Jun10 12:30] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +0.213819] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [ +29.247267] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +0.109477] kauditd_printk_skb: 73 callbacks suppressed
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +0.638576] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +0.214581] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +0.255487] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0610 12:32:09.314137    8536 command_runner.go:130] > [  +3.027967] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.239865] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.216732] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.314976] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.112938] kauditd_printk_skb: 183 callbacks suppressed
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.871081] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +5.053506] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.123809] kauditd_printk_skb: 34 callbacks suppressed
	I0610 12:32:09.314244    8536 command_runner.go:130] > [Jun10 12:31] kauditd_printk_skb: 62 callbacks suppressed
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +3.513215] hrtimer: interrupt took 368589 ns
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +0.107277] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0610 12:32:09.314244    8536 command_runner.go:130] > [  +7.541664] kauditd_printk_skb: 70 callbacks suppressed
	I0610 12:32:09.316399    8536 logs.go:123] Gathering logs for kindnet [c3c4316beca6] ...
	I0610 12:32:09.316475    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c4316beca6"
	I0610 12:32:09.350491    8536 command_runner.go:130] ! I0610 12:31:02.264969       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 12:32:09.350491    8536 command_runner.go:130] ! I0610 12:31:02.265572       1 main.go:107] hostIP = 172.17.150.144
	I0610 12:32:09.350491    8536 command_runner.go:130] ! podIP = 172.17.150.144
	I0610 12:32:09.350491    8536 command_runner.go:130] ! I0610 12:31:02.265708       1 main.go:116] setting mtu 1500 for CNI 
	I0610 12:32:09.350491    8536 command_runner.go:130] ! I0610 12:31:02.265761       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 12:32:09.351425    8536 command_runner.go:130] ! I0610 12:31:02.265778       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 12:32:09.351425    8536 command_runner.go:130] ! I0610 12:31:32.684223       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0610 12:32:09.351483    8536 command_runner.go:130] ! I0610 12:31:32.703397       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:09.351483    8536 command_runner.go:130] ! I0610 12:31:32.703595       1 main.go:227] handling current node
	I0610 12:32:09.351483    8536 command_runner.go:130] ! I0610 12:31:32.742189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:09.351530    8536 command_runner.go:130] ! I0610 12:31:32.742230       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:09.351564    8536 command_runner.go:130] ! I0610 12:31:32.742783       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.151.128 Flags: [] Table: 0} 
	I0610 12:32:09.351586    8536 command_runner.go:130] ! I0610 12:31:32.743097       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:09.351586    8536 command_runner.go:130] ! I0610 12:31:32.743120       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:09.351641    8536 command_runner.go:130] ! I0610 12:31:32.743193       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750326       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750472       1 main.go:227] handling current node
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750487       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750494       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750648       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:42.750678       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767023       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767174       1 main.go:227] handling current node
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767191       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767199       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767842       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:31:52.767929       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.782886       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.782992       1 main.go:227] handling current node
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.783008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.783073       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.783951       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:09.351697    8536 command_runner.go:130] ! I0610 12:32:02.784044       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
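The kindnet log above records its reconcile loop: every 10 seconds it lists the cluster's nodes, handles the current node, and for each remote node programs a route sending that node's pod CIDR via the node's IP (the routes.go "Adding route ..." lines). A rough sketch of programming one such route with github.com/vishvananda/netlink, reusing the CIDR and gateway from the log (an assumed illustration of the technique, not kindnet's source; Linux-only and requires CAP_NET_ADMIN):

	// program a pod-CIDR route for a remote node, as kindnet's log records
	package main

	import (
		"net"

		"github.com/vishvananda/netlink"
	)

	func main() {
		// Values taken from the log: multinode-813300-m02 owns 10.244.1.0/24
		// and is reachable at 172.17.151.128.
		_, dst, err := net.ParseCIDR("10.244.1.0/24")
		if err != nil {
			panic(err)
		}
		route := netlink.Route{
			Dst: dst,
			Gw:  net.ParseIP("172.17.151.128"),
		}
		// RouteReplace keeps the periodic reconcile idempotent: re-adding an
		// existing route updates it in place rather than erroring.
		if err := netlink.RouteReplace(&route); err != nil {
			panic(err)
		}
	}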
	I0610 12:32:09.354834    8536 logs.go:123] Gathering logs for Docker ...
	I0610 12:32:09.354834    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 12:32:09.394636    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:09.394738    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:09.394837    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:09.394837    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:09.394896    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:09.394954    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:09.395023    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:09.395023    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.395083    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:09.395285    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.395348    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:09.395348    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:09.395416    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:09.395491    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:09.395491    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:09.395625    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:09.395625    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:09.395691    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.395747    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0610 12:32:09.395747    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.395809    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:09.395866    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:09.395927    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:09.395984    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:09.395984    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:09.396048    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:09.396048    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:09.396107    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.396172    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0610 12:32:09.396172    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.396247    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0610 12:32:09.396247    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:09.396306    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.396355    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:09.396428    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.665734294Z" level=info msg="Starting up"
	I0610 12:32:09.396491    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.666799026Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:09.396547    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.668025832Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I0610 12:32:09.396611    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.707077561Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:09.396668    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745342414Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:09.396668    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745425201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:09.396730    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745528085Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:09.396835    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745580077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.396880    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746319960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.396943    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746463837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397006    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746722696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.397063    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746775088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397127    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746796184Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:09.397187    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746813182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397187    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.747203320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397251    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.748049086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397309    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752393000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.397370    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752519780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.397507    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752692453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.397588    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752790737Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:09.397588    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753305956Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:09.397733    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753420338Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:09.397798    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753439135Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:09.397798    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759080243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:09.397866    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759316106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:09.397931    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759347801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:09.397996    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759374497Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:09.398102    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759392594Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:09.398168    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759476281Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:09.398228    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759928509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:09.398314    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760128877Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:09.398356    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760824467Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:09.398405    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760850663Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:09.398491    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760867361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398575    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760883758Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398636    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760898556Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398636    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760914553Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398716    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760935350Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398771    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760951047Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398889    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760966645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398889    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760986442Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.398953    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761064230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399009    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761105323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399071    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761128319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399177    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761143417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399224    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761158215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399224    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761173012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399297    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761187310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399358    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761210007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399413    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761455768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399477    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761477764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399535    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761493962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399535    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761507660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399622    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761522057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399697    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761538755Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:09.399753    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761561351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399753    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761583448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.399816    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761598445Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:09.399873    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761652437Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:09.399990    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761676833Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:09.400055    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761691230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:09.400122    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761709928Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:09.400242    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761721526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.400287    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761735324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:09.400343    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761752021Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:09.400406    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762164056Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:09.400462    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762290536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:09.400524    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762532698Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:09.400585    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762557794Z" level=info msg="containerd successfully booted in 0.059804s"
	I0610 12:32:09.400639    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.723660372Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:09.400639    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.979070633Z" level=info msg="Loading containers: start."
	I0610 12:32:09.400705    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.430556665Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:09.400765    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.525359393Z" level=info msg="Loading containers: done."
	I0610 12:32:09.400819    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.555368825Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:09.400919    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.556499190Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:09.400977    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614621979Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:09.400977    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614710469Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:09.401043    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 systemd[1]: Started Docker Application Container Engine.
	I0610 12:32:09.401043    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.105858304Z" level=info msg="Processing signal 'terminated'"
	I0610 12:32:09.401100    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.107858244Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0610 12:32:09.401163    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 systemd[1]: Stopping Docker Application Container Engine...
	I0610 12:32:09.401218    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109274172Z" level=info msg="Daemon shutdown complete"
	I0610 12:32:09.401280    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109439076Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0610 12:32:09.401353    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109591179Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0610 12:32:09.401414    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: docker.service: Deactivated successfully.
	I0610 12:32:09.401414    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Stopped Docker Application Container Engine.
	I0610 12:32:09.401478    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:09.401530    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.200932485Z" level=info msg="Starting up"
	I0610 12:32:09.401943    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.202989526Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:09.401943    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.204789062Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0610 12:32:09.402075    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.250167169Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:09.402164    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291799101Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:09.402215    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291856902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:09.402268    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291930003Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:09.402268    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291948904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402268    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291983304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291997405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292182308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292287811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292310511Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292322911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292350212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292701119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.295953884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296063086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296411793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296455694Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296587396Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296721299Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296741600Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296941504Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297027105Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297046206Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297078906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297254610Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297334111Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297955024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:09.402450    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298031825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:09.402994    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298071126Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:09.402994    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298090126Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:09.402994    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298105527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.402994    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298120527Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.402994    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298155728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298172828Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298189828Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298204229Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298218329Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298230929Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298260030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298281530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298296531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298318131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298333531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298494735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298514735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298529635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298592837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298610037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298624437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298639137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298652438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298669738Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298693539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298708139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298720839Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298773440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298792441Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298805041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298820841Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298832741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298850742Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298862942Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299109447Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299202249Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299272150Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299312051Z" level=info msg="containerd successfully booted in 0.052836s"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.253253712Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.287070988Z" level=info msg="Loading containers: start."
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.612574192Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.704084520Z" level=info msg="Loading containers: done."
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733112200Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733256003Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.788468006Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 systemd[1]: Started Docker Application Container Engine.
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.790252742Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Loaded network plugin cni"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0610 12:32:09.403221    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0610 12:32:09.404240    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0610 12:32:09.404240    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0610 12:32:09.404240    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start cri-dockerd grpc backend"
	I0610 12:32:09.404240    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0610 12:32:09.404240    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-kbhvv_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c\""
	I0610 12:32:09.404416    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-z28tq_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d\""
	I0610 12:32:09.406780    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013449453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.406909    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013587556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.406959    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013608856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407041    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013775860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407092    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.087769538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407092    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089579074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407150    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089879880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407202    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.090133785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407257    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183156944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407310    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183215145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407366    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183227346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407366    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183318447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407417    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f56cc8af37db0f3fea8de363d927c6924c7ad7e81f4908f6f5c87d6c0db17a61/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.407470    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244245765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407521    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244411968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407591    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244427968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244593672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8902dac03acbce14b7e106bff482e591dd574972082943e9adda30969716a707/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b13c0058ce265f3c4b18ec59cbb42b72803807a8d96330756114b2526fffa2de/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611175897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611296299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611337700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.612109315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730665784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730725385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730738886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730907689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848373736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848822145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851216993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851612501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900274973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.407658    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900404876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.408698    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900419576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.408698    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900508378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.408698    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:59Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0610 12:32:09.408838    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830014876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.408998    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830867993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.408998    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831086098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409057    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831510106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409057    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854754571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.409057    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854918174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.409147    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.857723530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409184    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.858668949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409184    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.877394923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.409256    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878360042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.409256    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878507645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409301    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.879086357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409301    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d997d7c306c2a08fab9e0e53bd14a9da495d8b0abdad38c9935489b788eccd/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.409365    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2dd9b423841c9fee92dc2a884fe8f45fb9dd5b8713214ce8804ac8ced10629d1/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.409398    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337790622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.409398    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337963526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.409398    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337992226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409398    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.338102629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409526    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.394005846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.409560    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396505296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.409560    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396667999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409607    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396999105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409640    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c19b39e15f6ae82627ffedaf799ef63dd09554d65260dbfc8856b08a4ce7354/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.409640    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.711733694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.409690    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712144402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.409723    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712256705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409813    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712964519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.409813    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.980963328Z" level=info msg="shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:09.409861    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981035932Z" level=warning msg="cleaning up after shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:09.409861    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981047633Z" level=info msg="cleaning up dead shim" namespace=moby
	I0610 12:32:09.409893    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1052]: time="2024-06-10T12:31:31.981399154Z" level=info msg="ignoring event" container=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0610 12:32:09.410009    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.486941957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.410009    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487165464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.410062    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487187665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410096    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.488142597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410096    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345354892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.410174    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345592698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.410174    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345620799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410225    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345913706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410275    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.511059667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.410275    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512286197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.410316    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512437501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410350    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512775109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410391    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241c4811748facbb85003522d513039c3dfc5b38006b7f1cba90a5e411055e97/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:09.410425    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4d124cebb3b3affe7ace090f1a152544207db26621b5b4098cad87e3db47a4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0610 12:32:09.410466    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955148547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.410466    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955266050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.410499    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955283650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410540    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955812861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410581    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444723816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:09.410622    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444892597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:09.410655    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444914895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:09.410705    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.445846695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:11.966953    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:32:12.000238    8536 command_runner.go:130] > 1892
	I0610 12:32:12.000238    8536 api_server.go:72] duration metric: took 1m7.4789712s to wait for apiserver process to appear ...
	I0610 12:32:12.000238    8536 api_server.go:88] waiting for apiserver healthz status ...
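The wait logged at api_server.go:88 polls the apiserver's /healthz endpoint until it answers. Below is a minimal Go sketch of that style of polling loop, not minikube's actual implementation: the address reuses the node IP seen in the kube-proxy log further down, the 8443 port and timing are assumptions, and a real client would verify TLS against the cluster CA instead of skipping it.

// healthz_wait.go - a minimal sketch of polling a kube-apiserver /healthz
// endpoint until it returns 200/ok, in the spirit of the wait logged above.
// URL, port, interval, and InsecureSkipVerify are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves TLS signed by a cluster-internal CA; a
			// real client would load that CA rather than skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", string(body)) // typically "ok"
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Node IP taken from the kube-proxy log below; port 8443 is assumed.
	if err := waitForHealthz("https://172.17.150.144:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}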
	I0610 12:32:12.010491    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 12:32:12.040772    8536 command_runner.go:130] > d7941126134f
	I0610 12:32:12.040772    8536 logs.go:276] 1 containers: [d7941126134f]
	I0610 12:32:12.049441    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 12:32:12.078487    8536 command_runner.go:130] > 877ee07c1499
	I0610 12:32:12.078487    8536 logs.go:276] 1 containers: [877ee07c1499]
	I0610 12:32:12.087877    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 12:32:12.114066    8536 command_runner.go:130] > 24f3f7e041f9
	I0610 12:32:12.114612    8536 command_runner.go:130] > f2e39052db19
	I0610 12:32:12.114680    8536 logs.go:276] 2 containers: [24f3f7e041f9 f2e39052db19]
	I0610 12:32:12.123355    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 12:32:12.156483    8536 command_runner.go:130] > d90e72ef4670
	I0610 12:32:12.156483    8536 command_runner.go:130] > bd1a6cd98743
	I0610 12:32:12.156483    8536 logs.go:276] 2 containers: [d90e72ef4670 bd1a6cd98743]
	I0610 12:32:12.166208    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 12:32:12.192177    8536 command_runner.go:130] > 1de5fa0ef838
	I0610 12:32:12.192177    8536 command_runner.go:130] > afad8b05897e
	I0610 12:32:12.192177    8536 logs.go:276] 2 containers: [1de5fa0ef838 afad8b05897e]
	I0610 12:32:12.202221    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 12:32:12.224741    8536 command_runner.go:130] > 3bee53d5fef9
	I0610 12:32:12.225760    8536 command_runner.go:130] > f1409bf44ff1
	I0610 12:32:12.228048    8536 logs.go:276] 2 containers: [3bee53d5fef9 f1409bf44ff1]
	I0610 12:32:12.237371    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 12:32:12.265667    8536 command_runner.go:130] > c3c4316beca6
	I0610 12:32:12.265667    8536 command_runner.go:130] > c39d54960e7d
	I0610 12:32:12.265667    8536 logs.go:276] 2 containers: [c3c4316beca6 c39d54960e7d]
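Each `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` run above collects the container IDs for one control-plane component before its logs are tailed. A small Go sketch of the same enumeration follows, shelling out to the exact command shown in the log; the containerIDs helper name is hypothetical.

// list_containers.go - sketch of enumerating container IDs per component,
// mirroring the `docker ps -a --filter=name=k8s_... --format={{.ID}}` runs above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs is a hypothetical helper wrapping the docker CLI invocation
// that appears verbatim in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One short container ID per output line.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}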
	I0610 12:32:12.265667    8536 logs.go:123] Gathering logs for kube-scheduler [bd1a6cd98743] ...
	I0610 12:32:12.265667    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1a6cd98743"
	I0610 12:32:12.295683    8536 command_runner.go:130] ! I0610 12:07:55.711360       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:12.296495    8536 command_runner.go:130] ! W0610 12:07:57.417322       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:12.296495    8536 command_runner.go:130] ! W0610 12:07:57.417963       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:12.296630    8536 command_runner.go:130] ! W0610 12:07:57.418046       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:12.296660    8536 command_runner.go:130] ! W0610 12:07:57.418071       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:12.296660    8536 command_runner.go:130] ! I0610 12:07:57.459055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:12.296660    8536 command_runner.go:130] ! I0610 12:07:57.460659       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:12.296731    8536 command_runner.go:130] ! I0610 12:07:57.464904       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:12.296731    8536 command_runner.go:130] ! I0610 12:07:57.464952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:12.296731    8536 command_runner.go:130] ! I0610 12:07:57.466483       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:12.296803    8536 command_runner.go:130] ! I0610 12:07:57.466650       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:12.296803    8536 command_runner.go:130] ! W0610 12:07:57.502453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:12.296875    8536 command_runner.go:130] ! E0610 12:07:57.507264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:12.296875    8536 command_runner.go:130] ! W0610 12:07:57.503672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.296941    8536 command_runner.go:130] ! W0610 12:07:57.506076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:12.296941    8536 command_runner.go:130] ! W0610 12:07:57.506243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297028    8536 command_runner.go:130] ! W0610 12:07:57.506320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:12.297051    8536 command_runner.go:130] ! W0610 12:07:57.506362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297082    8536 command_runner.go:130] ! W0610 12:07:57.506402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:12.297082    8536 command_runner.go:130] ! W0610 12:07:57.506651       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:12.297151    8536 command_runner.go:130] ! W0610 12:07:57.506722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:12.297151    8536 command_runner.go:130] ! W0610 12:07:57.507113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297284    8536 command_runner.go:130] ! W0610 12:07:57.507193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:12.297284    8536 command_runner.go:130] ! E0610 12:07:57.511548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297284    8536 command_runner.go:130] ! E0610 12:07:57.511795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:12.297371    8536 command_runner.go:130] ! E0610 12:07:57.512240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297371    8536 command_runner.go:130] ! E0610 12:07:57.512647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:12.297371    8536 command_runner.go:130] ! E0610 12:07:57.515128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297455    8536 command_runner.go:130] ! E0610 12:07:57.515218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:12.297455    8536 command_runner.go:130] ! E0610 12:07:57.515698       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.516017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.516332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.516529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:57.537276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.537491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:57.537680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.538611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:57.537622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.538734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:57.538013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:57.539237       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:58.345815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! E0610 12:07:58.345914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:12.297576    8536 command_runner.go:130] ! W0610 12:07:58.356843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:12.298174    8536 command_runner.go:130] ! E0610 12:07:58.357045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.406587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.406863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.426795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.427119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.503514       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.503568       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.610877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.611650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.611603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.612141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.614694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.614992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.752570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.752635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! W0610 12:07:58.810605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:12.298687    8536 command_runner.go:130] ! E0610 12:07:58.810721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:12.299216    8536 command_runner.go:130] ! W0610 12:07:58.815170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:12.299278    8536 command_runner.go:130] ! E0610 12:07:58.815852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:12.299361    8536 command_runner.go:130] ! W0610 12:07:58.816493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:12.299394    8536 command_runner.go:130] ! E0610 12:07:58.816687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:12.299423    8536 command_runner.go:130] ! W0610 12:07:58.834947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.299486    8536 command_runner.go:130] ! E0610 12:07:58.836145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:12.299486    8536 command_runner.go:130] ! W0610 12:07:58.838693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:12.299551    8536 command_runner.go:130] ! E0610 12:07:58.838938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:12.299551    8536 command_runner.go:130] ! W0610 12:07:58.897162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:12.299607    8536 command_runner.go:130] ! E0610 12:07:58.897200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:12.299642    8536 command_runner.go:130] ! I0610 12:08:01.565495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:12.299674    8536 command_runner.go:130] ! E0610 12:28:16.298586       1 run.go:74] "command failed" err="finished without leader elect"
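The scheduler warnings above are startup RBAC denials, including the forbidden read of the extension-apiserver-authentication configmap that the log itself says a rolebinding on extension-apiserver-authentication-reader usually fixes; they stop once the client-ca cache syncs at 12:08:01. A hedged client-go sketch that reproduces just that single configmap read is below; the kubeconfig path is an assumption for illustration.

// check_rbac.go - sketch reproducing the configmap read the scheduler log
// reports as forbidden ("extension-apiserver-authentication" in kube-system).
// The kubeconfig path is an illustrative assumption.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	cm, err := clientset.CoreV1().ConfigMaps("kube-system").Get(
		context.TODO(), "extension-apiserver-authentication", metav1.GetOptions{})
	if err != nil {
		// Run with the scheduler's credentials before RBAC propagates, this
		// is the forbidden error seen in the log above.
		fmt.Println("get configmap:", err)
		return
	}
	fmt.Printf("configmap has %d keys\n", len(cm.Data))
}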
	I0610 12:32:12.311166    8536 logs.go:123] Gathering logs for kube-proxy [1de5fa0ef838] ...
	I0610 12:32:12.311166    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1de5fa0ef838"
	I0610 12:32:12.341154    8536 command_runner.go:130] ! I0610 12:31:02.254962       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:12.341154    8536 command_runner.go:130] ! I0610 12:31:02.294630       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.150.144"]
	I0610 12:32:12.341154    8536 command_runner.go:130] ! I0610 12:31:02.403290       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:12.341154    8536 command_runner.go:130] ! I0610 12:31:02.403338       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:12.341154    8536 command_runner.go:130] ! I0610 12:31:02.403357       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:12.342150    8536 command_runner.go:130] ! I0610 12:31:02.416009       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:12.342172    8536 command_runner.go:130] ! I0610 12:31:02.416300       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:12.342172    8536 command_runner.go:130] ! I0610 12:31:02.416345       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:12.342172    8536 command_runner.go:130] ! I0610 12:31:02.424657       1 config.go:192] "Starting service config controller"
	I0610 12:32:12.342172    8536 command_runner.go:130] ! I0610 12:31:02.425325       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:12.342233    8536 command_runner.go:130] ! I0610 12:31:02.425369       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:12.342259    8536 command_runner.go:130] ! I0610 12:31:02.425382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:12.342259    8536 command_runner.go:130] ! I0610 12:31:02.432037       1 config.go:319] "Starting node config controller"
	I0610 12:32:12.342259    8536 command_runner.go:130] ! I0610 12:31:02.432075       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:12.342259    8536 command_runner.go:130] ! I0610 12:31:02.535663       1 shared_informer.go:320] Caches are synced for node config
	I0610 12:32:12.342317    8536 command_runner.go:130] ! I0610 12:31:02.535744       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:12.342317    8536 command_runner.go:130] ! I0610 12:31:02.535786       1 shared_informer.go:320] Caches are synced for endpoint slice config
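The kube-proxy startup above shows the standard client-go handshake: each config controller logs "Waiting for caches to sync" and proceeds once the shared informers report synced. A minimal sketch of that pattern against the Services informer follows; the kubeconfig path and resync period are assumptions, and the printed strings just echo the shape of the log lines above.

// cache_sync.go - sketch of the shared-informer cache sync handshake the
// kube-proxy log shows ("Waiting for caches to sync" / "Caches are synced").
// Kubeconfig path and resync period are illustrative assumptions.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	stopCh := make(chan struct{})
	defer close(stopCh)

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	factory.Start(stopCh) // spawns the informer goroutines
	fmt.Println("Waiting for caches to sync for service config")
	if !cache.WaitForCacheSync(stopCh, svcInformer.HasSynced) {
		fmt.Println("timed out waiting for caches to sync")
		return
	}
	fmt.Println("Caches are synced for service config")
}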
	I0610 12:32:12.344172    8536 logs.go:123] Gathering logs for kindnet [c39d54960e7d] ...
	I0610 12:32:12.344172    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39d54960e7d"
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:45.866152       1 main.go:227] handling current node
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:45.866170       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:45.866178       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:55.883210       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:55.883426       1 main.go:227] handling current node
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:55.883562       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:12:55.883686       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.378199    8536 command_runner.go:130] ! I0610 12:13:05.893577       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379133    8536 command_runner.go:130] ! I0610 12:13:05.893734       1 main.go:227] handling current node
	I0610 12:32:12.379133    8536 command_runner.go:130] ! I0610 12:13:05.893787       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379133    8536 command_runner.go:130] ! I0610 12:13:05.893797       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379133    8536 command_runner.go:130] ! I0610 12:13:15.902454       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379190    8536 command_runner.go:130] ! I0610 12:13:15.902590       1 main.go:227] handling current node
	I0610 12:32:12.379226    8536 command_runner.go:130] ! I0610 12:13:15.902606       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379226    8536 command_runner.go:130] ! I0610 12:13:15.902614       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379226    8536 command_runner.go:130] ! I0610 12:13:25.917172       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379226    8536 command_runner.go:130] ! I0610 12:13:25.917277       1 main.go:227] handling current node
	I0610 12:32:12.379226    8536 command_runner.go:130] ! I0610 12:13:25.917297       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379299    8536 command_runner.go:130] ! I0610 12:13:25.917305       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379299    8536 command_runner.go:130] ! I0610 12:13:35.933505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379327    8536 command_runner.go:130] ! I0610 12:13:35.933609       1 main.go:227] handling current node
	I0610 12:32:12.379327    8536 command_runner.go:130] ! I0610 12:13:35.933623       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379327    8536 command_runner.go:130] ! I0610 12:13:35.933630       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379327    8536 command_runner.go:130] ! I0610 12:13:45.943963       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379392    8536 command_runner.go:130] ! I0610 12:13:45.944071       1 main.go:227] handling current node
	I0610 12:32:12.379392    8536 command_runner.go:130] ! I0610 12:13:45.944089       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379392    8536 command_runner.go:130] ! I0610 12:13:45.944114       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379392    8536 command_runner.go:130] ! I0610 12:13:55.953212       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379453    8536 command_runner.go:130] ! I0610 12:13:55.953354       1 main.go:227] handling current node
	I0610 12:32:12.379453    8536 command_runner.go:130] ! I0610 12:13:55.953371       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379478    8536 command_runner.go:130] ! I0610 12:13:55.953380       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379478    8536 command_runner.go:130] ! I0610 12:14:05.959968       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:05.960014       1 main.go:227] handling current node
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:05.960029       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:05.960036       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:15.970279       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:15.970375       1 main.go:227] handling current node
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:15.970391       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:15.970399       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:25.977769       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:25.977865       1 main.go:227] handling current node
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:25.977880       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:25.977886       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:35.984527       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:35.984582       1 main.go:227] handling current node
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:35.984596       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:35.984604       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:46.000499       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:46.000612       1 main.go:227] handling current node
	I0610 12:32:12.379506    8536 command_runner.go:130] ! I0610 12:14:46.000635       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.380247    8536 command_runner.go:130] ! I0610 12:14:46.000650       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.380247    8536 command_runner.go:130] ! I0610 12:14:56.007468       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.380524    8536 command_runner.go:130] ! I0610 12:14:56.007626       1 main.go:227] handling current node
	I0610 12:32:12.380524    8536 command_runner.go:130] ! I0610 12:14:56.007642       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.380524    8536 command_runner.go:130] ! I0610 12:14:56.007651       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.380524    8536 command_runner.go:130] ! I0610 12:15:06.022181       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.380524    8536 command_runner.go:130] ! I0610 12:15:06.022286       1 main.go:227] handling current node
	I0610 12:32:12.380592    8536 command_runner.go:130] ! I0610 12:15:06.022302       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.380592    8536 command_runner.go:130] ! I0610 12:15:06.022312       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.380592    8536 command_runner.go:130] ! I0610 12:15:16.038901       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.380701    8536 command_runner.go:130] ! I0610 12:15:16.038992       1 main.go:227] handling current node
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:16.039008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:16.039016       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:26.062184       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:26.062279       1 main.go:227] handling current node
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:26.062296       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.380768    8536 command_runner.go:130] ! I0610 12:15:26.062304       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.381307    8536 command_runner.go:130] ! I0610 12:15:36.071408       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.381376    8536 command_runner.go:130] ! I0610 12:15:36.071540       1 main.go:227] handling current node
	I0610 12:32:12.381376    8536 command_runner.go:130] ! I0610 12:15:36.071556       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.381376    8536 command_runner.go:130] ! I0610 12:15:36.071564       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.381376    8536 command_runner.go:130] ! I0610 12:15:46.078051       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.381376    8536 command_runner.go:130] ! I0610 12:15:46.078158       1 main.go:227] handling current node
	I0610 12:32:12.381476    8536 command_runner.go:130] ! I0610 12:15:46.078176       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.381476    8536 command_runner.go:130] ! I0610 12:15:46.078184       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.381563    8536 command_runner.go:130] ! I0610 12:15:56.086545       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.381752    8536 command_runner.go:130] ! I0610 12:15:56.086647       1 main.go:227] handling current node
	I0610 12:32:12.381752    8536 command_runner.go:130] ! I0610 12:15:56.086663       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.381752    8536 command_runner.go:130] ! I0610 12:15:56.086671       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.381752    8536 command_runner.go:130] ! I0610 12:16:06.094871       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.381850    8536 command_runner.go:130] ! I0610 12:16:06.094920       1 main.go:227] handling current node
	I0610 12:32:12.381881    8536 command_runner.go:130] ! I0610 12:16:06.094935       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.381881    8536 command_runner.go:130] ! I0610 12:16:06.094958       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.381881    8536 command_runner.go:130] ! I0610 12:16:16.109713       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:16.110282       1 main.go:227] handling current node
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:16.110679       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:16.110879       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:26.124392       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:26.124492       1 main.go:227] handling current node
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:26.124507       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.382558    8536 command_runner.go:130] ! I0610 12:16:26.124514       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383431    8536 command_runner.go:130] ! I0610 12:16:36.130696       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383431    8536 command_runner.go:130] ! I0610 12:16:36.130864       1 main.go:227] handling current node
	I0610 12:32:12.383474    8536 command_runner.go:130] ! I0610 12:16:36.130880       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383474    8536 command_runner.go:130] ! I0610 12:16:36.130888       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383512    8536 command_runner.go:130] ! I0610 12:16:46.145505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383537    8536 command_runner.go:130] ! I0610 12:16:46.145897       1 main.go:227] handling current node
	I0610 12:32:12.383568    8536 command_runner.go:130] ! I0610 12:16:46.146067       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383568    8536 command_runner.go:130] ! I0610 12:16:46.146083       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383568    8536 command_runner.go:130] ! I0610 12:16:56.160466       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383629    8536 command_runner.go:130] ! I0610 12:16:56.160571       1 main.go:227] handling current node
	I0610 12:32:12.383629    8536 command_runner.go:130] ! I0610 12:16:56.160586       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383629    8536 command_runner.go:130] ! I0610 12:16:56.160594       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383629    8536 command_runner.go:130] ! I0610 12:17:06.173930       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383629    8536 command_runner.go:130] ! I0610 12:17:06.173977       1 main.go:227] handling current node
	I0610 12:32:12.383695    8536 command_runner.go:130] ! I0610 12:17:06.173992       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383695    8536 command_runner.go:130] ! I0610 12:17:06.173999       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383695    8536 command_runner.go:130] ! I0610 12:17:16.180797       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383695    8536 command_runner.go:130] ! I0610 12:17:16.180971       1 main.go:227] handling current node
	I0610 12:32:12.383756    8536 command_runner.go:130] ! I0610 12:17:16.181005       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383756    8536 command_runner.go:130] ! I0610 12:17:16.181031       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383804    8536 command_runner.go:130] ! I0610 12:17:26.197081       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383826    8536 command_runner.go:130] ! I0610 12:17:26.197184       1 main.go:227] handling current node
	I0610 12:32:12.383826    8536 command_runner.go:130] ! I0610 12:17:26.197201       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383826    8536 command_runner.go:130] ! I0610 12:17:26.197210       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:36.204586       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:36.204700       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:36.204716       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:36.204725       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:46.214904       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:46.215024       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:46.215040       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:46.215048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:56.228072       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:56.228173       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:56.228189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:17:56.228197       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:06.237192       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:06.237303       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:06.237329       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:06.237354       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:16.244574       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:16.244799       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:16.244837       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:16.244863       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:26.258608       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:26.258654       1 main.go:227] handling current node
	I0610 12:32:12.383862    8536 command_runner.go:130] ! I0610 12:18:26.258669       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.384400    8536 command_runner.go:130] ! I0610 12:18:26.258676       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.385009    8536 command_runner.go:130] ! I0610 12:18:36.264620       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.385009    8536 command_runner.go:130] ! I0610 12:18:36.264824       1 main.go:227] handling current node
	I0610 12:32:12.385009    8536 command_runner.go:130] ! I0610 12:18:36.264841       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.385009    8536 command_runner.go:130] ! I0610 12:18:36.264850       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:46.275317       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:46.275426       1 main.go:227] handling current node
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:46.275460       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:46.275469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:56.290965       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.385974    8536 command_runner.go:130] ! I0610 12:18:56.291027       1 main.go:227] handling current node
	I0610 12:32:12.386060    8536 command_runner.go:130] ! I0610 12:18:56.291041       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386060    8536 command_runner.go:130] ! I0610 12:18:56.291048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:06.298370       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:06.298512       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:06.298529       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:06.298537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:16.309110       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:16.309215       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:16.309232       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:16.309240       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:26.322583       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:26.322633       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:26.322647       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:26.322654       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:36.336250       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:36.336376       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:36.336392       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:36.336400       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:46.350996       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:46.351137       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:46.351155       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:46.351164       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:56.356996       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:56.357039       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:56.357052       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:19:56.357059       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:06.372114       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:06.372883       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:06.373032       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:06.373062       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:16.381023       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:16.381690       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:16.381940       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:16.381975       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:26.389178       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:26.389224       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:26.389240       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:26.389247       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:36.395687       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:36.395828       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:36.395844       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:36.395851       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:46.410656       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:46.410865       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:46.410882       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:46.410891       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:56.425296       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:56.425540       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:56.425625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:20:56.425639       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:06.439346       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:06.439393       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:06.439406       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:06.439413       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:16.450424       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:16.450594       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:16.450628       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:16.450821       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:26.458379       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:26.458487       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:26.458503       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:26.458511       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:36.474243       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:36.474337       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:36.474354       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:36.474362       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:46.486635       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:46.486679       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:46.486693       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:46.486700       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:56.502256       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:56.502361       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:56.502377       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:21:56.502386       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:06.508796       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:06.508911       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:06.508928       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:06.508957       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:16.523863       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:16.523952       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:16.523970       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:16.523979       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:26.531516       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:26.531621       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:26.531637       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:26.531645       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:36.546403       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:36.546510       1 main.go:227] handling current node
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:36.546525       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.386098    8536 command_runner.go:130] ! I0610 12:22:36.546533       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388211    8536 command_runner.go:130] ! I0610 12:22:46.603429       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:46.603565       1 main.go:227] handling current node
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:46.603581       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:46.603590       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:56.619134       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:56.619253       1 main.go:227] handling current node
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:56.619287       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:22:56.619296       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:23:06.634307       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:23:06.634399       1 main.go:227] handling current node
	I0610 12:32:12.388262    8536 command_runner.go:130] ! I0610 12:23:06.634415       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388392    8536 command_runner.go:130] ! I0610 12:23:06.634424       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388392    8536 command_runner.go:130] ! I0610 12:23:16.649341       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388392    8536 command_runner.go:130] ! I0610 12:23:16.649508       1 main.go:227] handling current node
	I0610 12:32:12.388392    8536 command_runner.go:130] ! I0610 12:23:16.649527       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388457    8536 command_runner.go:130] ! I0610 12:23:16.649539       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388457    8536 command_runner.go:130] ! I0610 12:23:26.662421       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388457    8536 command_runner.go:130] ! I0610 12:23:26.662451       1 main.go:227] handling current node
	I0610 12:32:12.388457    8536 command_runner.go:130] ! I0610 12:23:26.662462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388517    8536 command_runner.go:130] ! I0610 12:23:26.662468       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388517    8536 command_runner.go:130] ! I0610 12:23:36.669686       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388517    8536 command_runner.go:130] ! I0610 12:23:36.669734       1 main.go:227] handling current node
	I0610 12:32:12.388517    8536 command_runner.go:130] ! I0610 12:23:36.669822       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388517    8536 command_runner.go:130] ! I0610 12:23:36.669831       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388584    8536 command_runner.go:130] ! I0610 12:23:46.678078       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388584    8536 command_runner.go:130] ! I0610 12:23:46.678194       1 main.go:227] handling current node
	I0610 12:32:12.388584    8536 command_runner.go:130] ! I0610 12:23:46.678209       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388584    8536 command_runner.go:130] ! I0610 12:23:46.678217       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388584    8536 command_runner.go:130] ! I0610 12:23:56.685841       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388659    8536 command_runner.go:130] ! I0610 12:23:56.685884       1 main.go:227] handling current node
	I0610 12:32:12.388659    8536 command_runner.go:130] ! I0610 12:23:56.685898       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388659    8536 command_runner.go:130] ! I0610 12:23:56.685905       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388732    8536 command_runner.go:130] ! I0610 12:24:06.692341       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388732    8536 command_runner.go:130] ! I0610 12:24:06.692609       1 main.go:227] handling current node
	I0610 12:32:12.388732    8536 command_runner.go:130] ! I0610 12:24:06.692699       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388732    8536 command_runner.go:130] ! I0610 12:24:06.692856       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388806    8536 command_runner.go:130] ! I0610 12:24:16.700494       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388806    8536 command_runner.go:130] ! I0610 12:24:16.700609       1 main.go:227] handling current node
	I0610 12:32:12.388806    8536 command_runner.go:130] ! I0610 12:24:16.700625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388806    8536 command_runner.go:130] ! I0610 12:24:16.700633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:26.716495       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:26.716609       1 main.go:227] handling current node
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:26.716625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:26.716633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:36.723606       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388862    8536 command_runner.go:130] ! I0610 12:24:36.723716       1 main.go:227] handling current node
	I0610 12:32:12.388931    8536 command_runner.go:130] ! I0610 12:24:36.723733       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.388931    8536 command_runner.go:130] ! I0610 12:24:36.724254       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.388931    8536 command_runner.go:130] ! I0610 12:24:46.739916       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.388931    8536 command_runner.go:130] ! I0610 12:24:46.740008       1 main.go:227] handling current node
	I0610 12:32:12.388931    8536 command_runner.go:130] ! I0610 12:24:46.740402       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389004    8536 command_runner.go:130] ! I0610 12:24:46.740432       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389004    8536 command_runner.go:130] ! I0610 12:24:56.759676       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389004    8536 command_runner.go:130] ! I0610 12:24:56.760848       1 main.go:227] handling current node
	I0610 12:32:12.389004    8536 command_runner.go:130] ! I0610 12:24:56.760902       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389059    8536 command_runner.go:130] ! I0610 12:24:56.760914       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389059    8536 command_runner.go:130] ! I0610 12:25:06.771450       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389116    8536 command_runner.go:130] ! I0610 12:25:06.771514       1 main.go:227] handling current node
	I0610 12:32:12.389116    8536 command_runner.go:130] ! I0610 12:25:06.771530       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389116    8536 command_runner.go:130] ! I0610 12:25:06.771537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389116    8536 command_runner.go:130] ! I0610 12:25:16.778338       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389171    8536 command_runner.go:130] ! I0610 12:25:16.778445       1 main.go:227] handling current node
	I0610 12:32:12.389171    8536 command_runner.go:130] ! I0610 12:25:16.778461       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389171    8536 command_runner.go:130] ! I0610 12:25:16.778469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389171    8536 command_runner.go:130] ! I0610 12:25:26.791778       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389230    8536 command_runner.go:130] ! I0610 12:25:26.791933       1 main.go:227] handling current node
	I0610 12:32:12.389230    8536 command_runner.go:130] ! I0610 12:25:26.791950       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389230    8536 command_runner.go:130] ! I0610 12:25:26.791974       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389230    8536 command_runner.go:130] ! I0610 12:25:36.800633       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389285    8536 command_runner.go:130] ! I0610 12:25:36.800842       1 main.go:227] handling current node
	I0610 12:32:12.389285    8536 command_runner.go:130] ! I0610 12:25:36.800860       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389285    8536 command_runner.go:130] ! I0610 12:25:36.800869       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389285    8536 command_runner.go:130] ! I0610 12:25:46.815290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389285    8536 command_runner.go:130] ! I0610 12:25:46.815339       1 main.go:227] handling current node
	I0610 12:32:12.389341    8536 command_runner.go:130] ! I0610 12:25:46.815355       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389341    8536 command_runner.go:130] ! I0610 12:25:46.815363       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389341    8536 command_runner.go:130] ! I0610 12:25:56.830374       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389341    8536 command_runner.go:130] ! I0610 12:25:56.830439       1 main.go:227] handling current node
	I0610 12:32:12.389398    8536 command_runner.go:130] ! I0610 12:25:56.830471       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389398    8536 command_runner.go:130] ! I0610 12:25:56.830478       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389398    8536 command_runner.go:130] ! I0610 12:25:56.831222       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389398    8536 command_runner.go:130] ! I0610 12:25:56.831411       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389398    8536 command_runner.go:130] ! I0610 12:25:56.831494       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:12.389499    8536 command_runner.go:130] ! I0610 12:26:06.840295       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389499    8536 command_runner.go:130] ! I0610 12:26:06.840446       1 main.go:227] handling current node
	I0610 12:32:12.389499    8536 command_runner.go:130] ! I0610 12:26:06.840464       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389499    8536 command_runner.go:130] ! I0610 12:26:06.840913       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389579    8536 command_runner.go:130] ! I0610 12:26:06.845129       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389579    8536 command_runner.go:130] ! I0610 12:26:06.845329       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389579    8536 command_runner.go:130] ! I0610 12:26:16.860365       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389579    8536 command_runner.go:130] ! I0610 12:26:16.860476       1 main.go:227] handling current node
	I0610 12:32:12.389579    8536 command_runner.go:130] ! I0610 12:26:16.860493       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389642    8536 command_runner.go:130] ! I0610 12:26:16.860502       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389642    8536 command_runner.go:130] ! I0610 12:26:16.861223       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389642    8536 command_runner.go:130] ! I0610 12:26:16.861379       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389701    8536 command_runner.go:130] ! I0610 12:26:26.873719       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389701    8536 command_runner.go:130] ! I0610 12:26:26.873964       1 main.go:227] handling current node
	I0610 12:32:12.389701    8536 command_runner.go:130] ! I0610 12:26:26.874016       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389701    8536 command_runner.go:130] ! I0610 12:26:26.874181       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389701    8536 command_runner.go:130] ! I0610 12:26:26.874413       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389759    8536 command_runner.go:130] ! I0610 12:26:26.874451       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389759    8536 command_runner.go:130] ! I0610 12:26:36.881254       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389759    8536 command_runner.go:130] ! I0610 12:26:36.881366       1 main.go:227] handling current node
	I0610 12:32:12.389759    8536 command_runner.go:130] ! I0610 12:26:36.881382       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389759    8536 command_runner.go:130] ! I0610 12:26:36.881407       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389818    8536 command_runner.go:130] ! I0610 12:26:36.881814       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389818    8536 command_runner.go:130] ! I0610 12:26:36.881908       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389818    8536 command_runner.go:130] ! I0610 12:26:46.900700       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389876    8536 command_runner.go:130] ! I0610 12:26:46.900797       1 main.go:227] handling current node
	I0610 12:32:12.389876    8536 command_runner.go:130] ! I0610 12:26:46.900815       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389876    8536 command_runner.go:130] ! I0610 12:26:46.900823       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389876    8536 command_runner.go:130] ! I0610 12:26:46.900956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389876    8536 command_runner.go:130] ! I0610 12:26:46.900985       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907395       1 main.go:227] handling current node
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907412       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907420       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907548       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.389949    8536 command_runner.go:130] ! I0610 12:26:56.907656       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922305       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922349       1 main.go:227] handling current node
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922361       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922367       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922490       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390046    8536 command_runner.go:130] ! I0610 12:27:06.922515       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390138    8536 command_runner.go:130] ! I0610 12:27:16.929579       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390138    8536 command_runner.go:130] ! I0610 12:27:16.929687       1 main.go:227] handling current node
	I0610 12:32:12.390138    8536 command_runner.go:130] ! I0610 12:27:16.929704       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:16.929712       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:16.930550       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:16.930641       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:26.944603       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:26.944719       1 main.go:227] handling current node
	I0610 12:32:12.390197    8536 command_runner.go:130] ! I0610 12:27:26.944772       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390265    8536 command_runner.go:130] ! I0610 12:27:26.945138       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390265    8536 command_runner.go:130] ! I0610 12:27:26.945535       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390265    8536 command_runner.go:130] ! I0610 12:27:26.945625       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390265    8536 command_runner.go:130] ! I0610 12:27:36.955188       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390265    8536 command_runner.go:130] ! I0610 12:27:36.955329       1 main.go:227] handling current node
	I0610 12:32:12.390427    8536 command_runner.go:130] ! I0610 12:27:36.955462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390427    8536 command_runner.go:130] ! I0610 12:27:36.955581       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390427    8536 command_runner.go:130] ! I0610 12:27:36.955956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390427    8536 command_runner.go:130] ! I0610 12:27:36.956158       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390427    8536 command_runner.go:130] ! I0610 12:27:46.965590       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:46.965717       1 main.go:227] handling current node
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:46.965826       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:46.965836       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:46.966598       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:46.966708       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:56.999276       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:56.999553       1 main.go:227] handling current node
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:56.999711       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:56.999728       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:57.000088       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:27:57.000177       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015069       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015281       1 main.go:227] handling current node
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015300       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015308       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015707       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:12.390485    8536 command_runner.go:130] ! I0610 12:28:07.015928       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
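	The kindnet entries above are one reconciliation pass repeated roughly every ten seconds: the agent walks the node list, logs "handling current node" for the node it runs on, and for every peer logs its pod CIDR; the routes.go:62 line at 12:25:56 is the visible effect of such a pass, where multinode-813300-m03 was discovered and the 10.244.2.0/24 route via 172.17.144.46 was installed. Below is a minimal Go sketch of that loop, assuming iproute2's idempotent "ip route replace" and illustrative type and function names; it is not kindnet's actual code.

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	type node struct {
		name    string
		ip      string // InternalIP, e.g. 172.17.151.128
		podCIDR string // e.g. 10.244.1.0/24
		current bool   // true for the node this agent runs on
	}

	func reconcile(nodes []node) {
		for _, n := range nodes {
			log.Printf("Handling node with IPs: map[%s:{}]", n.ip)
			if n.current {
				// Local pod CIDR is reached directly; nothing to install.
				log.Printf("handling current node")
				continue
			}
			log.Printf("Node %s has CIDR [%s]", n.name, n.podCIDR)
			// "ip route replace" adds the route or updates it in place,
			// so re-running every pass is safe. Needs root and iproute2.
			out, err := exec.Command("ip", "route", "replace", n.podCIDR, "via", n.ip).CombinedOutput()
			if err != nil {
				log.Printf("route replace failed: %v: %s", err, out)
			}
		}
	}

	func main() {
		nodes := []node{
			{"multinode-813300", "172.17.159.171", "10.244.0.0/24", true},
			{"multinode-813300-m02", "172.17.151.128", "10.244.1.0/24", false},
		}
		for {
			reconcile(nodes) // one pass per interval, matching the log cadence
			time.Sleep(10 * time.Second)
		}
	}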
	I0610 12:32:12.413148    8536 logs.go:123] Gathering logs for dmesg ...
	I0610 12:32:12.413148    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 12:32:12.439745    8536 command_runner.go:130] > [Jun10 12:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.132459] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.024371] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.082449] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +0.022513] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0610 12:32:12.440283    8536 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0610 12:32:12.440283    8536 command_runner.go:130] > [  +5.764981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0610 12:32:12.440467    8536 command_runner.go:130] > [  +1.334692] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0610 12:32:12.440467    8536 command_runner.go:130] > [  +1.227872] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0610 12:32:12.440467    8536 command_runner.go:130] > [  +7.275008] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0610 12:32:12.440467    8536 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0610 12:32:12.440467    8536 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0610 12:32:12.440467    8536 command_runner.go:130] > [Jun10 12:30] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	I0610 12:32:12.440551    8536 command_runner.go:130] > [  +0.213819] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	I0610 12:32:12.440551    8536 command_runner.go:130] > [ +29.247267] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.109477] kauditd_printk_skb: 73 callbacks suppressed
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.638576] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.214581] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.255487] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +3.027967] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.239865] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0610 12:32:12.440582    8536 command_runner.go:130] > [  +0.216732] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0610 12:32:12.440665    8536 command_runner.go:130] > [  +0.314976] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0610 12:32:12.440665    8536 command_runner.go:130] > [  +0.112938] kauditd_printk_skb: 183 callbacks suppressed
	I0610 12:32:12.440665    8536 command_runner.go:130] > [  +0.871081] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	I0610 12:32:12.440665    8536 command_runner.go:130] > [  +5.053506] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	I0610 12:32:12.440665    8536 command_runner.go:130] > [  +0.123809] kauditd_printk_skb: 34 callbacks suppressed
	I0610 12:32:12.440720    8536 command_runner.go:130] > [Jun10 12:31] kauditd_printk_skb: 62 callbacks suppressed
	I0610 12:32:12.440720    8536 command_runner.go:130] > [  +3.513215] hrtimer: interrupt took 368589 ns
	I0610 12:32:12.440720    8536 command_runner.go:130] > [  +0.107277] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0610 12:32:12.440720    8536 command_runner.go:130] > [  +7.541664] kauditd_printk_skb: 70 callbacks suppressed
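	The dmesg capture above comes from the pipeline shown at 12:32:12.413148: util-linux dmesg with -P (no pager), -H (human-readable relative timestamps), -L=never (no color), and --level restricted to warning severity and above, trimmed to the last 400 lines. The sketch below reproduces that capture locally and re-emits each line with the "> " prefix that command_runner:130 uses for captured stdout in this log; it is an illustration of the pattern, not minikube's ssh_runner itself, and it needs root for dmesg.

	package main

	import (
		"bufio"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same pipeline the log-gathering step runs over SSH.
		out, err := exec.Command("/bin/bash", "-c",
			`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`).Output()
		if err != nil {
			log.Fatalf("dmesg: %v", err)
		}
		// Prefix each captured line, mirroring the report's "> " lines.
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			log.Printf("> %s", sc.Text())
		}
	}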
	I0610 12:32:12.442368    8536 logs.go:123] Gathering logs for describe nodes ...
	I0610 12:32:12.442368    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 12:32:12.666754    8536 command_runner.go:130] > Name:               multinode-813300
	I0610 12:32:12.666754    8536 command_runner.go:130] > Roles:              control-plane
	I0610 12:32:12.666754    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:12.666754    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:12.666754    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:12.666754    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300
	I0610 12:32:12.666754    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:12.666754    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0610 12:32:12.667755    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:12.667755    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:12.667755    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:07:57 +0000
	I0610 12:32:12.667755    8536 command_runner.go:130] > Taints:             <none>
	I0610 12:32:12.667755    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:12.667755    8536 command_runner.go:130] > Lease:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300
	I0610 12:32:12.667755    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:12.667755    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:32:10 +0000
	I0610 12:32:12.667755    8536 command_runner.go:130] > Conditions:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0610 12:32:12.667755    8536 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0610 12:32:12.667755    8536 command_runner.go:130] >   DiskPressure     False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0610 12:32:12.667755    8536 command_runner.go:130] >   PIDPressure      False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Ready            True    Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:31:40 +0000   KubeletReady                 kubelet is posting ready status
	I0610 12:32:12.667755    8536 command_runner.go:130] > Addresses:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   InternalIP:  172.17.150.144
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Hostname:    multinode-813300
	I0610 12:32:12.667755    8536 command_runner.go:130] > Capacity:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.667755    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.667755    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.667755    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.667755    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.667755    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.667755    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.667755    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.667755    8536 command_runner.go:130] > System Info:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Machine ID:                 8363a852b0fa420a8dccb009e6f4f9c7
	I0610 12:32:12.667755    8536 command_runner.go:130] >   System UUID:                5734c1ff-f59b-f647-9c36-fb6d9a8cd541
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Boot ID:                    a60b688f-6b78-4fa5-b21e-96a64e5c1047
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:12.667755    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:12.667755    8536 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0610 12:32:12.667755    8536 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0610 12:32:12.667755    8536 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0610 12:32:12.667755    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-z28tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-kbhvv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 etcd-multinode-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 kindnet-29gbv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 kube-proxy-nrpvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:12.667755    8536 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0610 12:32:12.667755    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Resource           Requests     Limits
	I0610 12:32:12.667755    8536 command_runner.go:130] >   --------           --------     ------
	I0610 12:32:12.667755    8536 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0610 12:32:12.667755    8536 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0610 12:32:12.667755    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0610 12:32:12.667755    8536 command_runner.go:130] > Events:
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:12.667755    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:12.667755    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-813300 status is now: NodeReady
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
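	The node description that follows is the core failure signal in this describe output: every condition on multinode-813300-m02 sits at Unknown with reason NodeStatusUnknown, because its kubelet stopped posting status (last lease renewal 12:27:30, transition 12:28:15), and the node carries the unreachable NoExecute/NoSchedule taints. A short client-go sketch that surfaces the same Ready condition programmatically, without parsing describe output; the kubeconfig path here mirrors the one used by the describe step above and is illustrative only.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from a kubeconfig, the same way kubectl does.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// Status "Unknown" here corresponds to the
					// NodeStatusUnknown rows in the describe output.
					fmt.Printf("%s\tReady=%s\t(%s)\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}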
	I0610 12:32:12.668749    8536 command_runner.go:130] > Name:               multinode-813300-m02
	I0610 12:32:12.668749    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:12.668749    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m02
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:12.668749    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:12.668749    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:11:28 +0000
	I0610 12:32:12.668749    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:12.668749    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:12.668749    8536 command_runner.go:130] > Lease:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m02
	I0610 12:32:12.668749    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:12.668749    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:30 +0000
	I0610 12:32:12.668749    8536 command_runner.go:130] > Conditions:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:12.668749    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.668749    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.668749    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.668749    8536 command_runner.go:130] > Addresses:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   InternalIP:  172.17.151.128
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Hostname:    multinode-813300-m02
	I0610 12:32:12.668749    8536 command_runner.go:130] > Capacity:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.668749    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.668749    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.668749    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.668749    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.668749    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.668749    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.668749    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.668749    8536 command_runner.go:130] > System Info:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Machine ID:                 0d46b791e8a04ff7a071c88405a5a4eb
	I0610 12:32:12.668749    8536 command_runner.go:130] >   System UUID:                e053fc34-e8e5-6649-afc7-f62c0d458753
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Boot ID:                    a3528c50-da8b-4321-8198-65ea5eca732a
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:12.668749    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:12.668749    8536 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0610 12:32:12.668749    8536 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0610 12:32:12.668749    8536 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0610 12:32:12.668749    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-czxmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:12.668749    8536 command_runner.go:130] >   kube-system                 kindnet-r4nfq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0610 12:32:12.668749    8536 command_runner.go:130] >   kube-system                 kube-proxy-rx2b2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0610 12:32:12.668749    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:12.668749    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:12.668749    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:12.668749    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:12.668749    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:12.668749    8536 command_runner.go:130] > Events:
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:12.668749    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientMemory
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasNoDiskPressure
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientPID
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-813300-m02 status is now: NodeReady
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  NodeNotReady             3m57s              node-controller  Node multinode-813300-m02 status is now: NodeNotReady
	I0610 12:32:12.668749    8536 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
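multinode-813300-m02's conditions flip to Unknown with reason NodeStatusUnknown once its kubelet stops heartbeating, and the node lifecycle controller adds the node.kubernetes.io/unreachable taints seen above. A hedged client-go sketch that surfaces the same per-node Ready condition (kubeconfig loading uses the stock clientcmd default path, nothing minikube-specific):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; minikube merges its contexts into this file.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Ready=Unknown reason=NodeStatusUnknown matches the tables above.
				fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}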
	I0610 12:32:12.668749    8536 command_runner.go:130] > Name:               multinode-813300-m03
	I0610 12:32:12.668749    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:12.668749    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:12.668749    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m03
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_25_53_0700
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:12.669804    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:12.669804    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:25:52 +0000
	I0610 12:32:12.669804    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:12.669804    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:12.669804    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:12.669804    8536 command_runner.go:130] > Lease:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m03
	I0610 12:32:12.669804    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:12.669804    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:04 +0000
	I0610 12:32:12.669804    8536 command_runner.go:130] > Conditions:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:12.669804    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.669804    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.669804    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:12.669804    8536 command_runner.go:130] > Addresses:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   InternalIP:  172.17.144.46
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Hostname:    multinode-813300-m03
	I0610 12:32:12.669804    8536 command_runner.go:130] > Capacity:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.669804    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.669804    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.669804    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.669804    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:12.669804    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:12.669804    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:12.669804    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:12.669804    8536 command_runner.go:130] > System Info:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Machine ID:                 2d60e1f6e3b2454db505a650eae61212
	I0610 12:32:12.669804    8536 command_runner.go:130] >   System UUID:                b38b4a9a-39f6-6f43-9e6d-19433dc62cd9
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Boot ID:                    0a419483-5289-4d17-96c2-fd4487360412
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:12.669804    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:12.669804    8536 command_runner.go:130] > PodCIDR:                      10.244.2.0/24
	I0610 12:32:12.669804    8536 command_runner.go:130] > PodCIDRs:                     10.244.2.0/24
	I0610 12:32:12.669804    8536 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0610 12:32:12.669804    8536 command_runner.go:130] >   kube-system                 kindnet-2pc4j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m20s
	I0610 12:32:12.669804    8536 command_runner.go:130] >   kube-system                 kube-proxy-vw56h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	I0610 12:32:12.669804    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:12.669804    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:12.669804    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:12.669804    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:12.669804    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:12.669804    8536 command_runner.go:130] > Events:
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0610 12:32:12.669804    8536 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  Starting                 6m7s                   kube-proxy       
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m20s (x2 over 6m20s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientMemory
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m20s (x2 over 6m20s)  kubelet          Node multinode-813300-m03 status is now: NodeHasNoDiskPressure
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m20s (x2 over 6m20s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientPID
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:12.669804    8536 command_runner.go:130] >   Normal  RegisteredNode           6m18s                  node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
	I0610 12:32:12.670789    8536 command_runner.go:130] >   Normal  NodeReady                5m59s                  kubelet          Node multinode-813300-m03 status is now: NodeReady
	I0610 12:32:12.670789    8536 command_runner.go:130] >   Normal  NodeNotReady             4m28s                  node-controller  Node multinode-813300-m03 status is now: NodeNotReady
	I0610 12:32:12.670789    8536 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
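The Non-terminated Pods counts above (3 on m02, 2 on m03) come from listing pods bound to the node whose phase is neither Succeeded nor Failed. The same query, as a sketch, using client-go field selectors (clientset setup as in the previous sketch; the function name is illustrative):

package inspect

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nonTerminatedPods mirrors the "Non-terminated Pods" table in the describe
// output above: pods scheduled to nodeName that are still pending or running.
func nonTerminatedPods(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName +
			",status.phase!=Succeeded,status.phase!=Failed",
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\n", p.Namespace, p.Name)
	}
	return nil
}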
	I0610 12:32:12.679741    8536 logs.go:123] Gathering logs for kube-apiserver [d7941126134f] ...
	I0610 12:32:12.680767    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7941126134f"
	I0610 12:32:12.725360    8536 command_runner.go:130] ! I0610 12:30:56.783636       1 options.go:221] external host was not specified, using 172.17.150.144
	I0610 12:32:12.725360    8536 command_runner.go:130] ! I0610 12:30:56.802716       1 server.go:148] Version: v1.30.1
	I0610 12:32:12.725429    8536 command_runner.go:130] ! I0610 12:30:56.802771       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:12.725429    8536 command_runner.go:130] ! I0610 12:30:57.206580       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 12:32:12.725429    8536 command_runner.go:130] ! I0610 12:30:57.224598       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:12.725491    8536 command_runner.go:130] ! I0610 12:30:57.225809       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 12:32:12.725553    8536 command_runner.go:130] ! I0610 12:30:57.226002       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 12:32:12.725708    8536 command_runner.go:130] ! I0610 12:30:57.226365       1 instance.go:299] Using reconciler: lease
	I0610 12:32:12.725770    8536 command_runner.go:130] ! I0610 12:30:57.637999       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0610 12:32:12.725871    8536 command_runner.go:130] ! W0610 12:30:57.638403       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.725871    8536 command_runner.go:130] ! I0610 12:30:58.007103       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0610 12:32:12.726572    8536 command_runner.go:130] ! I0610 12:30:58.008169       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0610 12:32:12.726572    8536 command_runner.go:130] ! I0610 12:30:58.357732       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0610 12:32:12.726572    8536 command_runner.go:130] ! I0610 12:30:58.553660       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0610 12:32:12.726572    8536 command_runner.go:130] ! I0610 12:30:58.567826       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0610 12:32:12.726572    8536 command_runner.go:130] ! W0610 12:30:58.567936       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726642    8536 command_runner.go:130] ! W0610 12:30:58.567947       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.726694    8536 command_runner.go:130] ! I0610 12:30:58.569137       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0610 12:32:12.726712    8536 command_runner.go:130] ! W0610 12:30:58.569236       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726712    8536 command_runner.go:130] ! I0610 12:30:58.570636       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0610 12:32:12.726745    8536 command_runner.go:130] ! I0610 12:30:58.572063       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0610 12:32:12.726745    8536 command_runner.go:130] ! W0610 12:30:58.572082       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0610 12:32:12.726779    8536 command_runner.go:130] ! W0610 12:30:58.572088       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0610 12:32:12.726808    8536 command_runner.go:130] ! I0610 12:30:58.575154       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0610 12:32:12.726808    8536 command_runner.go:130] ! W0610 12:30:58.575194       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0610 12:32:12.726835    8536 command_runner.go:130] ! I0610 12:30:58.576862       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0610 12:32:12.726869    8536 command_runner.go:130] ! W0610 12:30:58.576966       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726869    8536 command_runner.go:130] ! W0610 12:30:58.576976       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.726869    8536 command_runner.go:130] ! I0610 12:30:58.577920       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0610 12:32:12.726904    8536 command_runner.go:130] ! W0610 12:30:58.578059       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726904    8536 command_runner.go:130] ! W0610 12:30:58.578305       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726941    8536 command_runner.go:130] ! I0610 12:30:58.579295       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0610 12:32:12.726941    8536 command_runner.go:130] ! I0610 12:30:58.581807       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0610 12:32:12.726941    8536 command_runner.go:130] ! W0610 12:30:58.581943       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.726941    8536 command_runner.go:130] ! W0610 12:30:58.582127       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727024    8536 command_runner.go:130] ! I0610 12:30:58.583254       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0610 12:32:12.727024    8536 command_runner.go:130] ! W0610 12:30:58.583359       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727024    8536 command_runner.go:130] ! W0610 12:30:58.583370       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727024    8536 command_runner.go:130] ! I0610 12:30:58.594003       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0610 12:32:12.727024    8536 command_runner.go:130] ! W0610 12:30:58.594046       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0610 12:32:12.727024    8536 command_runner.go:130] ! I0610 12:30:58.597008       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0610 12:32:12.727024    8536 command_runner.go:130] ! W0610 12:30:58.597028       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727125    8536 command_runner.go:130] ! W0610 12:30:58.597047       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727125    8536 command_runner.go:130] ! I0610 12:30:58.597658       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0610 12:32:12.727125    8536 command_runner.go:130] ! W0610 12:30:58.597679       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727125    8536 command_runner.go:130] ! W0610 12:30:58.597686       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727125    8536 command_runner.go:130] ! I0610 12:30:58.602889       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0610 12:32:12.727191    8536 command_runner.go:130] ! W0610 12:30:58.602907       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727191    8536 command_runner.go:130] ! W0610 12:30:58.602913       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727191    8536 command_runner.go:130] ! I0610 12:30:58.608646       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0610 12:32:12.727245    8536 command_runner.go:130] ! I0610 12:30:58.610262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0610 12:32:12.727280    8536 command_runner.go:130] ! W0610 12:30:58.610275       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0610 12:32:12.727309    8536 command_runner.go:130] ! W0610 12:30:58.610281       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727309    8536 command_runner.go:130] ! I0610 12:30:58.619816       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0610 12:32:12.727339    8536 command_runner.go:130] ! W0610 12:30:58.619856       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0610 12:32:12.727339    8536 command_runner.go:130] ! W0610 12:30:58.619866       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0610 12:32:12.727379    8536 command_runner.go:130] ! I0610 12:30:58.627044       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0610 12:32:12.727379    8536 command_runner.go:130] ! W0610 12:30:58.627092       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727417    8536 command_runner.go:130] ! W0610 12:30:58.627296       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:12.727417    8536 command_runner.go:130] ! I0610 12:30:58.629017       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0610 12:32:12.727417    8536 command_runner.go:130] ! W0610 12:30:58.629067       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727476    8536 command_runner.go:130] ! I0610 12:30:58.659122       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0610 12:32:12.727500    8536 command_runner.go:130] ! W0610 12:30:58.659244       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:12.727523    8536 command_runner.go:130] ! I0610 12:30:59.341469       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:12.727550    8536 command_runner.go:130] ! I0610 12:30:59.341814       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:12.727550    8536 command_runner.go:130] ! I0610 12:30:59.341806       1 secure_serving.go:213] Serving securely on [::]:8443
	I0610 12:32:12.729353    8536 command_runner.go:130] ! I0610 12:30:59.342486       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0610 12:32:12.729677    8536 command_runner.go:130] ! I0610 12:30:59.342867       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0610 12:32:12.730522    8536 command_runner.go:130] ! I0610 12:30:59.342901       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0610 12:32:12.730522    8536 command_runner.go:130] ! I0610 12:30:59.342987       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0610 12:32:12.730522    8536 command_runner.go:130] ! I0610 12:30:59.341865       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:12.731087    8536 command_runner.go:130] ! I0610 12:30:59.344865       1 controller.go:116] Starting legacy_token_tracking_controller
	I0610 12:32:12.731087    8536 command_runner.go:130] ! I0610 12:30:59.344899       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0610 12:32:12.731087    8536 command_runner.go:130] ! I0610 12:30:59.346737       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0610 12:32:12.731087    8536 command_runner.go:130] ! I0610 12:30:59.346910       1 available_controller.go:423] Starting AvailableConditionController
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.346960       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.347078       1 aggregator.go:163] waiting for initial CRD sync...
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.347170       1 controller.go:78] Starting OpenAPI AggregationController
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.347256       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.347656       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0610 12:32:12.731147    8536 command_runner.go:130] ! I0610 12:30:59.347947       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0610 12:32:12.731247    8536 command_runner.go:130] ! I0610 12:30:59.348233       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0610 12:32:12.731247    8536 command_runner.go:130] ! I0610 12:30:59.348295       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.341877       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.377996       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.378109       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.378362       1 controller.go:139] Starting OpenAPI controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.378742       1 controller.go:87] Starting OpenAPI V3 controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.378883       1 naming_controller.go:291] Starting NamingConditionController
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379043       1 establishing_controller.go:76] Starting EstablishingController
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379247       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379438       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379518       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379777       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.379999       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.524664       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.525326       1 policy_source.go:224] refreshing policies
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.543486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.547084       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.548579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.549972       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.550011       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.551151       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.554229       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.560228       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.578343       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.578414       1 aggregator.go:165] initial CRD sync complete...
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.578429       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.578437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.578466       1 cache.go:39] Caches are synced for autoregister controller
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:30:59.606740       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:00.360768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 12:32:12.731286    8536 command_runner.go:130] ! W0610 12:31:00.893787       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.150.144]
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:00.913283       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:00.933946       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:02.471259       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:02.690867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 12:32:12.731286    8536 command_runner.go:130] ! I0610 12:31:02.714405       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 12:32:12.731820    8536 command_runner.go:130] ! I0610 12:31:02.840117       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 12:32:12.731820    8536 command_runner.go:130] ! I0610 12:31:02.856715       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
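Both gathering steps, the `docker logs --tail 400` run above and the journalctl run that follows, are plain shell commands executed over minikube's SSH runner inside the VM, with combined output folded back into this report. A rough local sketch of the pattern with os/exec (no SSH hop, and the report runs journalctl under sudo; the container ID is the kube-apiserver ID from this log, so purely illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same commands the ssh_runner lines execute inside the VM.
	for _, args := range [][]string{
		{"docker", "logs", "--tail", "400", "d7941126134f"},
		{"journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
	} {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v failed: %v\n", args, err)
		}
		fmt.Print(string(out))
	}
}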
	I0610 12:32:12.740531    8536 logs.go:123] Gathering logs for Docker ...
	I0610 12:32:12.740531    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 12:32:12.768085    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:12.768648    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.769173    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:12.769173    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.665734294Z" level=info msg="Starting up"
	I0610 12:32:12.769223    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.666799026Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:12.769268    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.668025832Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I0610 12:32:12.769296    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.707077561Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:12.769296    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745342414Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:12.769358    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745425201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:12.769358    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745528085Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:12.769397    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745580077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769397    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746319960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.769440    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746463837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746722696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746775088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746796184Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746813182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.747203320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.748049086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752393000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.769468    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752519780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.770079    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752692453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.770098    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752790737Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753305956Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753420338Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753439135Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759080243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759316106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759347801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759374497Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759392594Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759476281Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759928509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760128877Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760824467Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760850663Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760867361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760883758Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760898556Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760914553Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760935350Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760951047Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770158    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760966645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770867    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760986442Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761064230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761105323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761128319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761143417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761158215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761173012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761187310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761210007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761455768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761477764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761493962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761507660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761522057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761538755Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761561351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761583448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761598445Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761652437Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761676833Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:12.770906    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761691230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761709928Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761721526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761735324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761752021Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762164056Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762290536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762532698Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762557794Z" level=info msg="containerd successfully booted in 0.059804s"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.723660372Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.979070633Z" level=info msg="Loading containers: start."
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.430556665Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.525359393Z" level=info msg="Loading containers: done."
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.555368825Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.556499190Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614621979Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614710469Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 systemd[1]: Started Docker Application Container Engine.
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.105858304Z" level=info msg="Processing signal 'terminated'"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.107858244Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 systemd[1]: Stopping Docker Application Container Engine...
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109274172Z" level=info msg="Daemon shutdown complete"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109439076Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109591179Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: docker.service: Deactivated successfully.
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Stopped Docker Application Container Engine.
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.200932485Z" level=info msg="Starting up"
	I0610 12:32:12.771786    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.202989526Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:12.772354    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.204789062Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0610 12:32:12.772411    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.250167169Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:12.772480    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291799101Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:12.772480    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291856902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:12.772480    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291930003Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:12.772480    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291948904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772548    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291983304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.772548    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291997405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772606    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292182308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.772606    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292287811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772667    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292310511Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:12.772721    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292322911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772721    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292350212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772756    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292701119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772794    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.295953884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.772832    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296063086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:12.772893    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296411793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:12.772945    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296455694Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:12.772945    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296587396Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:12.773470    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296721299Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:12.773470    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296741600Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:12.773470    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296941504Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:12.773531    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297027105Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:12.773531    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297046206Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:12.773626    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297078906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:12.773645    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297254610Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:12.773645    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297334111Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:12.773645    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297955024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:12.773719    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298031825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:12.773719    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298071126Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:12.773719    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298090126Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:12.773791    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298105527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773791    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298120527Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773791    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298155728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298172828Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298189828Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298204229Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298218329Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298230929Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298260030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298281530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298296531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298318131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298333531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298494735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298514735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298529635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.773858    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298592837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298610037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298624437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298639137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298652438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298669738Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298693539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298708139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298720839Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298773440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298792441Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298805041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298820841Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:12.774514    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298832741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298850742Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298862942Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299109447Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299202249Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299272150Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299312051Z" level=info msg="containerd successfully booted in 0.052836s"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.253253712Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.287070988Z" level=info msg="Loading containers: start."
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.612574192Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.704084520Z" level=info msg="Loading containers: done."
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733112200Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733256003Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.788468006Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 systemd[1]: Started Docker Application Container Engine.
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.790252742Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Loaded network plugin cni"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start cri-dockerd grpc backend"
	I0610 12:32:12.775509    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-kbhvv_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c\""
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-z28tq_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d\""
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013449453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013587556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013608856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013775860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.087769538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089579074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089879880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.090133785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183156944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183215145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183227346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183318447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f56cc8af37db0f3fea8de363d927c6924c7ad7e81f4908f6f5c87d6c0db17a61/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244245765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244411968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244427968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244593672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8902dac03acbce14b7e106bff482e591dd574972082943e9adda30969716a707/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b13c0058ce265f3c4b18ec59cbb42b72803807a8d96330756114b2526fffa2de/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611175897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611296299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611337700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.612109315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730665784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730725385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730738886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730907689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848373736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848822145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851216993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851612501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900274973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900404876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900419576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900508378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:59Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0610 12:32:12.776516    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830014876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830867993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831086098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831510106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854754571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854918174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.857723530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.858668949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.877394923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878360042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878507645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.879086357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d997d7c306c2a08fab9e0e53bd14a9da495d8b0abdad38c9935489b788eccd/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2dd9b423841c9fee92dc2a884fe8f45fb9dd5b8713214ce8804ac8ced10629d1/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337790622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337963526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337992226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.338102629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.394005846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396505296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396667999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396999105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c19b39e15f6ae82627ffedaf799ef63dd09554d65260dbfc8856b08a4ce7354/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.711733694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712144402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712256705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712964519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.980963328Z" level=info msg="shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981035932Z" level=warning msg="cleaning up after shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981047633Z" level=info msg="cleaning up dead shim" namespace=moby
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1052]: time="2024-06-10T12:31:31.981399154Z" level=info msg="ignoring event" container=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.486941957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487165464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487187665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.488142597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345354892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345592698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345620799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345913706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.511059667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.777521    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512286197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512437501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512775109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241c4811748facbb85003522d513039c3dfc5b38006b7f1cba90a5e411055e97/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4d124cebb3b3affe7ace090f1a152544207db26621b5b4098cad87e3db47a4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955148547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955266050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955283650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955812861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444723816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444892597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444914895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.778507    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.445846695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:12.810555    8536 logs.go:123] Gathering logs for kube-proxy [afad8b05897e] ...
	I0610 12:32:12.810555    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 afad8b05897e"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.787330       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.815813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.159.171"]
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.929231       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.929304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.929325       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.933115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.933534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.933681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.935227       1 config.go:192] "Starting service config controller"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.935260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.935291       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.935297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.937731       1 config.go:319] "Starting node config controller"
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:17.938095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:18.035433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:18.035502       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:12.842859    8536 command_runner.go:130] ! I0610 12:08:18.038590       1 shared_informer.go:320] Caches are synced for node config
	I0610 12:32:12.843904    8536 logs.go:123] Gathering logs for container status ...
	I0610 12:32:12.843904    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 12:32:12.912460    8536 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0610 12:32:12.912460    8536 command_runner.go:130] > b9550940a81ca       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   c4d124cebb3b3       busybox-fc5497c4f-z28tq
	I0610 12:32:12.912460    8536 command_runner.go:130] > 24f3f7e041f98       cbb01a7bd410d                                                                                         8 seconds ago        Running             coredns                   1                   241c4811748fa       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:12.912460    8536 command_runner.go:130] > e934ffe0f9032       6e38f40d628db                                                                                         25 seconds ago       Running             storage-provisioner       2                   2dd9b423841c9       storage-provisioner
	I0610 12:32:12.912460    8536 command_runner.go:130] > c3c4316beca64       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   0c19b39e15f6a       kindnet-29gbv
	I0610 12:32:12.912460    8536 command_runner.go:130] > cc9dbe4aa4005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   2dd9b423841c9       storage-provisioner
	I0610 12:32:12.912460    8536 command_runner.go:130] > 1de5fa0ef8384       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   06d997d7c306c       kube-proxy-nrpvt
	I0610 12:32:12.912460    8536 command_runner.go:130] > d7941126134f2       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   5c3da3b59b527       kube-apiserver-multinode-813300
	I0610 12:32:12.912460    8536 command_runner.go:130] > 877ee07c14997       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   b13c0058ce265       etcd-multinode-813300
	I0610 12:32:12.912460    8536 command_runner.go:130] > d90e72ef46704       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   8902dac03acbc       kube-scheduler-multinode-813300
	I0610 12:32:12.912460    8536 command_runner.go:130] > 3bee53d5fef91       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   f56cc8af37db0       kube-controller-manager-multinode-813300
	I0610 12:32:12.912460    8536 command_runner.go:130] > 91782a06524c6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   9ffef928b2474       busybox-fc5497c4f-z28tq
	I0610 12:32:12.912460    8536 command_runner.go:130] > f2e39052db195       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   a1ae7aed00678       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:12.912460    8536 command_runner.go:130] > c39d54960e7d7       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   689b8976cc029       kindnet-29gbv
	I0610 12:32:12.912460    8536 command_runner.go:130] > afad8b05897e5       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   62db1c721951a       kube-proxy-nrpvt
	I0610 12:32:12.912460    8536 command_runner.go:130] > bd1a6cd987430       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e3b6aa9a0e1d1       kube-scheduler-multinode-813300
	I0610 12:32:12.912460    8536 command_runner.go:130] > f1409bf44ff14       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f04d7b3d4fcc6       kube-controller-manager-multinode-813300
	I0610 12:32:12.915441    8536 logs.go:123] Gathering logs for etcd [877ee07c1499] ...
	I0610 12:32:12.915441    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877ee07c1499"
	I0610 12:32:12.950726    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.207374Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:12.950726    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208407Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.150.144:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.150.144:2380","--initial-cluster=multinode-813300=https://172.17.150.144:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.150.144:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.150.144:2380","--name=multinode-813300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0610 12:32:12.950726    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208499Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0610 12:32:12.951546    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.208577Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:12.951546    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208593Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.150.144:2380"]}
	I0610 12:32:12.951610    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208715Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:12.951610    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.218326Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"]}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.22047Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-813300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.244201Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.944438ms"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.274404Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.303075Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","commit-index":1913}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=()"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became follower at term 2"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8f4442f54c46fb8d [peers: [], term: 2, commit: 1913, applied: 0, lastindex: 1913, lastterm: 2]"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.318917Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.323726Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1273}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.328272Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1642}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.335671Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.347777Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8f4442f54c46fb8d","timeout":"7s"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.349755Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8f4442f54c46fb8d"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.350228Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8f4442f54c46fb8d","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.352715Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.36067Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361057Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361302Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=(10323449867154160525)"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363612Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","added-peer-id":"8f4442f54c46fb8d","added-peer-peer-urls":["https://172.17.159.171:2380"]}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364067Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","cluster-version":"3.5"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0610 12:32:12.951746    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.367772Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:12.952272    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.373962Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.150.144:2380"}
	I0610 12:32:12.952318    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.374209Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.150.144:2380"}
	I0610 12:32:12.952364    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375497Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8f4442f54c46fb8d","initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0610 12:32:12.952413    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375805Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0610 12:32:12.952468    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d is starting a new election at term 2"}
	I0610 12:32:12.952468    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.50539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became pre-candidate at term 2"}
	I0610 12:32:12.952468    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgPreVoteResp from 8f4442f54c46fb8d at term 2"}
	I0610 12:32:12.952468    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became candidate at term 3"}
	I0610 12:32:12.952550    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgVoteResp from 8f4442f54c46fb8d at term 3"}
	I0610 12:32:12.952582    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became leader at term 3"}
	I0610 12:32:12.952582    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8f4442f54c46fb8d elected leader 8f4442f54c46fb8d at term 3"}
	I0610 12:32:12.952582    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.511486Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8f4442f54c46fb8d","local-member-attributes":"{Name:multinode-813300 ClientURLs:[https://172.17.150.144:2379]}","request-path":"/0/members/8f4442f54c46fb8d/attributes","cluster-id":"ede117c4f607edf2","publish-timeout":"7s"}
	I0610 12:32:12.952651    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:12.952690    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512682Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:12.952690    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.517481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0610 12:32:12.952690    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0610 12:32:12.952742    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520973Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0610 12:32:12.952742    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.543402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.150.144:2379"}
	I0610 12:32:12.963241    8536 logs.go:123] Gathering logs for coredns [f2e39052db19] ...
	I0610 12:32:12.963241    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e39052db19"
	I0610 12:32:12.997242    8536 command_runner.go:130] > .:53
	I0610 12:32:12.998246    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:12.998310    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:12.998310    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 127.0.0.1:46276 - 35337 "HINFO IN 965239639799927989.3587586823131848737. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.052340371s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:36040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003047s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:51901 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.165635405s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:38890 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.065664181s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:40219 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.107303974s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:38184 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002396s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:57966 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001307s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:38338 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002131s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:41898 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000121s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:49043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200101s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:53918 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.147842886s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:50531 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001726s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:41881 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001246s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:34708 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.030026838s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002834s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:58166 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001901s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.1.2:46174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001048s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:52212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003513s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:44369 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095801s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:38578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001615s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:38593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002977s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:38526 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137201s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:48445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001467s
	I0610 12:32:12.998310    8536 command_runner.go:130] > [INFO] 10.244.0.3:47462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000731s
	I0610 12:32:12.998887    8536 command_runner.go:130] > [INFO] 10.244.0.3:58225 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196101s
	I0610 12:32:12.998967    8536 command_runner.go:130] > [INFO] 10.244.1.2:35924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001833s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.1.2:51712 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001386s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.1.2:37161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.1.2:37141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001227s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.0.3:56133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247001s
	I0610 12:32:12.998990    8536 command_runner.go:130] > [INFO] 10.244.0.3:48451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000604s
	I0610 12:32:12.999111    8536 command_runner.go:130] > [INFO] 10.244.0.3:38368 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001264s
	I0610 12:32:12.999210    8536 command_runner.go:130] > [INFO] 10.244.1.2:44129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001056s
	I0610 12:32:12.999272    8536 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001955s
	I0610 12:32:12.999272    8536 command_runner.go:130] > [INFO] 10.244.1.2:59467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001589s
	I0610 12:32:12.999272    8536 command_runner.go:130] > [INFO] 10.244.1.2:53581 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002131s
	I0610 12:32:12.999272    8536 command_runner.go:130] > [INFO] 10.244.0.3:41745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	I0610 12:32:12.999345    8536 command_runner.go:130] > [INFO] 10.244.0.3:53512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001784s
	I0610 12:32:12.999345    8536 command_runner.go:130] > [INFO] 10.244.0.3:56441 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001208s
	I0610 12:32:12.999345    8536 command_runner.go:130] > [INFO] 10.244.0.3:55640 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001199s
	I0610 12:32:12.999345    8536 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0610 12:32:12.999345    8536 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0610 12:32:13.002280    8536 logs.go:123] Gathering logs for kube-scheduler [d90e72ef4670] ...
	I0610 12:32:13.002345    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90e72ef4670"
	I0610 12:32:13.034151    8536 command_runner.go:130] ! I0610 12:30:56.811878       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:13.035221    8536 command_runner.go:130] ! W0610 12:30:59.481898       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:13.035221    8536 command_runner.go:130] ! W0610 12:30:59.482123       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:13.035221    8536 command_runner.go:130] ! W0610 12:30:59.482217       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:13.035221    8536 command_runner.go:130] ! W0610 12:30:59.482255       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.514164       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.514266       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.518405       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.518496       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.518958       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.519337       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:13.035221    8536 command_runner.go:130] ! I0610 12:30:59.619122       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:13.038028    8536 logs.go:123] Gathering logs for kube-controller-manager [f1409bf44ff1] ...
	I0610 12:32:13.038028    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1409bf44ff1"
	I0610 12:32:13.076581    8536 command_runner.go:130] ! I0610 12:07:55.502430       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:13.076581    8536 command_runner.go:130] ! I0610 12:07:56.114557       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:13.076707    8536 command_runner.go:130] ! I0610 12:07:56.114858       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.076707    8536 command_runner.go:130] ! I0610 12:07:56.117078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:13.076707    8536 command_runner.go:130] ! I0610 12:07:56.117365       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:13.076707    8536 command_runner.go:130] ! I0610 12:07:56.118392       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:13.076772    8536 command_runner.go:130] ! I0610 12:07:56.118623       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:13.076802    8536 command_runner.go:130] ! I0610 12:08:00.413505       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:13.076802    8536 command_runner.go:130] ! I0610 12:08:00.413532       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:13.076837    8536 command_runner.go:130] ! I0610 12:08:00.454038       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:13.076873    8536 command_runner.go:130] ! I0610 12:08:00.454303       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:13.076873    8536 command_runner.go:130] ! I0610 12:08:00.454341       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:13.076921    8536 command_runner.go:130] ! I0610 12:08:00.474947       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:13.076921    8536 command_runner.go:130] ! I0610 12:08:00.475105       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:00.475116       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:00.514703       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.509914       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.510020       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.511115       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.511148       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.515475       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.515547       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.516308       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.516334       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.516340       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.531416       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.531434       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.531293       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.543960       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.544630       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.544667       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.567000       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.567602       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.568240       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.586627       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.587637       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.587654       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.623685       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.623975       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.624342       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.639985       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.640617       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.640810       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.702326       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.706246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:13.076953    8536 command_runner.go:130] ! I0610 12:08:10.711937       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:13.077479    8536 command_runner.go:130] ! I0610 12:08:10.712131       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.712146       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.712235       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.712265       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.724980       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.726393       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:13.077520    8536 command_runner.go:130] ! I0610 12:08:10.726653       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:13.077601    8536 command_runner.go:130] ! I0610 12:08:10.742390       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:13.077626    8536 command_runner.go:130] ! I0610 12:08:10.743099       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:13.077626    8536 command_runner.go:130] ! I0610 12:08:10.744498       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:13.077626    8536 command_runner.go:130] ! I0610 12:08:10.759177       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:13.077626    8536 command_runner.go:130] ! I0610 12:08:10.759262       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:13.077707    8536 command_runner.go:130] ! I0610 12:08:10.759917       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:13.077707    8536 command_runner.go:130] ! I0610 12:08:10.759932       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:13.077707    8536 command_runner.go:130] ! I0610 12:08:10.901245       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:13.077707    8536 command_runner.go:130] ! I0610 12:08:10.903470       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:13.077791    8536 command_runner.go:130] ! I0610 12:08:10.903502       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:13.077791    8536 command_runner.go:130] ! I0610 12:08:11.064066       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:13.077791    8536 command_runner.go:130] ! I0610 12:08:11.064123       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:13.077872    8536 command_runner.go:130] ! I0610 12:08:11.064135       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:13.077872    8536 command_runner.go:130] ! I0610 12:08:11.202164       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:13.077925    8536 command_runner.go:130] ! I0610 12:08:11.202227       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:13.077925    8536 command_runner.go:130] ! I0610 12:08:11.202239       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:13.077960    8536 command_runner.go:130] ! I0610 12:08:11.352380       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:13.077997    8536 command_runner.go:130] ! I0610 12:08:11.352546       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:13.078037    8536 command_runner.go:130] ! I0610 12:08:11.352575       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:13.078037    8536 command_runner.go:130] ! I0610 12:08:11.656918       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:13.078037    8536 command_runner.go:130] ! I0610 12:08:11.657560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:13.078109    8536 command_runner.go:130] ! I0610 12:08:11.657950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:13.078109    8536 command_runner.go:130] ! I0610 12:08:11.658269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:13.078109    8536 command_runner.go:130] ! I0610 12:08:11.658437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:13.078169    8536 command_runner.go:130] ! I0610 12:08:11.658699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:13.078209    8536 command_runner.go:130] ! I0610 12:08:11.658785       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:13.078288    8536 command_runner.go:130] ! I0610 12:08:11.658822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:13.078288    8536 command_runner.go:130] ! I0610 12:08:11.658849       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:13.078288    8536 command_runner.go:130] ! I0610 12:08:11.658870       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:13.078343    8536 command_runner.go:130] ! I0610 12:08:11.658895       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.658915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.658950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.658987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659056       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! W0610 12:08:11.659073       1 shared_informer.go:597] resyncPeriod 13h6m28.341601393s is smaller than resyncCheckPeriod 19h0m49.916968618s and the informer has already started. Changing it to 19h0m49.916968618s
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659195       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659214       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659236       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659287       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659312       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659579       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659591       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.659608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.895313       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.895383       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.895693       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:11.896490       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.154521       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.154576       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.154658       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.154690       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.301351       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.301495       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.301508       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.495309       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.495425       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.495645       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.495683       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:13.078393    8536 command_runner.go:130] ! E0610 12:08:12.550245       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:13.078393    8536 command_runner.go:130] ! I0610 12:08:12.550671       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:13.078930    8536 command_runner.go:130] ! E0610 12:08:12.700493       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:13.078930    8536 command_runner.go:130] ! I0610 12:08:12.700528       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:13.079010    8536 command_runner.go:130] ! I0610 12:08:12.700538       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:13.079010    8536 command_runner.go:130] ! I0610 12:08:12.859280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:13.079010    8536 command_runner.go:130] ! I0610 12:08:12.859580       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:13.079010    8536 command_runner.go:130] ! I0610 12:08:12.859953       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:13.079010    8536 command_runner.go:130] ! I0610 12:08:12.906626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:13.079123    8536 command_runner.go:130] ! I0610 12:08:12.907724       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:13.079123    8536 command_runner.go:130] ! I0610 12:08:13.050431       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:13.079123    8536 command_runner.go:130] ! I0610 12:08:13.050510       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:13.079123    8536 command_runner.go:130] ! I0610 12:08:13.205885       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:13.079217    8536 command_runner.go:130] ! I0610 12:08:13.205970       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.205982       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.351713       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.351815       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.351830       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.603420       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.603498       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.603510       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.750262       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.750789       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.750809       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.900118       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.900639       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:13.900897       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.054008       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.054156       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.054170       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.199527       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.199627       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.199683       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.199694       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.351474       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.352193       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.352213       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.502148       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.502250       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.502262       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.502696       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.502825       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.546684       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.547077       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.547608       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.547097       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.079265    8536 command_runner.go:130] ! I0610 12:08:14.547127       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:13.079792    8536 command_runner.go:130] ! I0610 12:08:14.547931       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:13.079839    8536 command_runner.go:130] ! I0610 12:08:14.547138       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.547188       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.548434       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.547199       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.547257       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.548692       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.547265       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.558628       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.590023       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.600506       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.602694       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.603324       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.609611       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.612038       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.623629       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.624495       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.612329       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.628289       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.630516       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.630648       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.622860       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.627541       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.627554       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.627562       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.627813       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.631141       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.631364       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.631669       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.631834       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.642451       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.644828       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.645380       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.647678       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.648798       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.648809       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:13.079874    8536 command_runner.go:130] ! I0610 12:08:14.648848       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:13.080400    8536 command_runner.go:130] ! I0610 12:08:14.656075       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:13.080400    8536 command_runner.go:130] ! I0610 12:08:14.656781       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:13.080400    8536 command_runner.go:130] ! I0610 12:08:14.657449       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:13.080400    8536 command_runner.go:130] ! I0610 12:08:14.657643       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.658125       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.661079       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.668926       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.675620       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.680953       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300" podCIDRs=["10.244.0.0/24"]
	I0610 12:32:13.080452    8536 command_runner.go:130] ! I0610 12:08:14.687842       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:13.080541    8536 command_runner.go:130] ! I0610 12:08:14.751377       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:13.080586    8536 command_runner.go:130] ! I0610 12:08:14.754827       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:13.080586    8536 command_runner.go:130] ! I0610 12:08:14.795731       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:13.080586    8536 command_runner.go:130] ! I0610 12:08:14.803976       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:13.080586    8536 command_runner.go:130] ! I0610 12:08:14.807376       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:13.080586    8536 command_runner.go:130] ! I0610 12:08:14.807800       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:14.851108       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:14.858915       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:14.859692       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:14.864873       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:15.295934       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:15.296041       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:15.332772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:15.887603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="329.520484ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.024148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.478301ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.151441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.784808ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.151859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="288.402µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.577624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.03545ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.593339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.556101ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:16.593508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.3µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:30.535681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:30.566310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.4µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:32.538906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="180.301µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:32.610537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.137489ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:32.611020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5µs"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:08:34.635560       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:11:28.859639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:11:28.879298       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m02" podCIDRs=["10.244.1.0/24"]
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:11:29.670639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:11:51.574110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:12:19.785464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.490556ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:12:19.804051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.524284ms"
	I0610 12:32:13.080646    8536 command_runner.go:130] ! I0610 12:12:19.806222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0610 12:32:13.081175    8536 command_runner.go:130] ! I0610 12:12:19.813010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.401µs"
	I0610 12:32:13.081175    8536 command_runner.go:130] ! I0610 12:12:19.818841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.9µs"
	I0610 12:32:13.081175    8536 command_runner.go:130] ! I0610 12:12:22.803157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.023114ms"
	I0610 12:32:13.081234    8536 command_runner.go:130] ! I0610 12:12:22.803959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.7µs"
	I0610 12:32:13.081234    8536 command_runner.go:130] ! I0610 12:12:23.117968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.704624ms"
	I0610 12:32:13.081234    8536 command_runner.go:130] ! I0610 12:12:23.118507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.5µs"
	I0610 12:32:13.081234    8536 command_runner.go:130] ! I0610 12:25:52.678571       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:13.081334    8536 command_runner.go:130] ! I0610 12:25:52.681612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:13.081334    8536 command_runner.go:130] ! I0610 12:25:52.698797       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m03" podCIDRs=["10.244.2.0/24"]
	I0610 12:32:13.081415    8536 command_runner.go:130] ! I0610 12:25:54.878967       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:13.081436    8536 command_runner.go:130] ! I0610 12:26:13.380155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:13.081436    8536 command_runner.go:130] ! I0610 12:27:44.944679       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:13.081436    8536 command_runner.go:130] ! I0610 12:28:15.516170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.644756ms"
	I0610 12:32:13.081436    8536 command_runner.go:130] ! I0610 12:28:15.516815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.1µs"
	I0610 12:32:13.099511    8536 logs.go:123] Gathering logs for kindnet [c3c4316beca6] ...
	I0610 12:32:13.099511    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c4316beca6"
	I0610 12:32:13.135489    8536 command_runner.go:130] ! I0610 12:31:02.264969       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:02.265572       1 main.go:107] hostIP = 172.17.150.144
	I0610 12:32:13.136494    8536 command_runner.go:130] ! podIP = 172.17.150.144
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:02.265708       1 main.go:116] setting mtu 1500 for CNI 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:02.265761       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:02.265778       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.684223       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.703397       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.703595       1 main.go:227] handling current node
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.742189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.742230       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.742783       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.151.128 Flags: [] Table: 0} 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.743097       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.743120       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:32.743193       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750326       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750472       1 main.go:227] handling current node
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750487       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750494       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750648       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:42.750678       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767023       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767174       1 main.go:227] handling current node
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767191       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767199       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767842       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:31:52.767929       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.782886       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.782992       1 main.go:227] handling current node
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.783008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.783073       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.783951       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:02.784044       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.799859       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.799956       1 main.go:227] handling current node
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.799981       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.799989       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.800455       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:13.136494    8536 command_runner.go:130] ! I0610 12:32:12.800616       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:13.139473    8536 logs.go:123] Gathering logs for kubelet ...
	I0610 12:32:13.139473    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 12:32:13.175674    8536 command_runner.go:130] > Jun 10 12:30:48 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:13.175674    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322075    1392 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:13.175674    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322142    1392 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.175674    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.324143    1392 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:13.175836    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: E0610 12:30:49.325228    1392 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:13.175836    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:13.175923    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:13.175923    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:13.175923    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:13.175994    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:13.176022    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078361    1448 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:13.176022    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078445    1448 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078696    1448 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: E0610 12:30:50.078819    1448 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:53 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021338    1528 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021853    1528 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.022286    1528 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.024650    1528 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.040752    1528 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.082883    1528 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.083180    1528 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085143    1528 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085256    1528 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-813300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.086924    1528 topology_manager.go:138] "Creating topology manager with none policy"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.087122    1528 container_manager_linux.go:301] "Creating device plugin manager"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.088486    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.090915    1528 kubelet.go:400] "Attempting to sync node with API server"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091108    1528 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091402    1528 kubelet.go:312] "Adding apiserver pod source"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.092259    1528 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.097253    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.097520    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.099693    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.176053    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.099740    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.176595    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.099843    1528 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0610 12:32:13.176595    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.102710    1528 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0610 12:32:13.176641    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.103981    1528 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0610 12:32:13.176641    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.107194    1528 server.go:1264] "Started kubelet"
	I0610 12:32:13.176641    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.120692    1528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0610 12:32:13.176917    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.122088    1528 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0610 12:32:13.176979    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.125028    1528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.128857    1528 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.132449    1528 server.go:455] "Adding debug handlers to kubelet server"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.124281    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.137444    1528 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.139221    1528 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.141909    1528 factory.go:221] Registration of the systemd container factory successfully
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147241    1528 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147375    1528 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.144942    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="200ms"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.143108    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.154145    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.179909    1528 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180022    1528 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180086    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181162    1528 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181233    1528 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181261    1528 policy_none.go:49] "None policy: Start"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.192385    1528 reconciler.go:26] "Reconciler: start to sync state"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193179    1528 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193256    1528 state_mem.go:35] "Initializing new in-memory state store"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193830    1528 state_mem.go:75] "Updated machine memory state"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.197194    1528 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.204265    1528 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0610 12:32:13.177034    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.219894    1528 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0610 12:32:13.177562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.226098    1528 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-813300\" not found"
	I0610 12:32:13.177562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.226649    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0610 12:32:13.177608    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.230123    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0610 12:32:13.177608    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231021    1528 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0610 12:32:13.177608    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231133    1528 kubelet.go:2337] "Starting kubelet main sync loop"
	I0610 12:32:13.177608    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.231189    1528 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0610 12:32:13.177608    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.244084    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:13.177712    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.247037    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.177712    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.247227    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.177712    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.253607    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.255809    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334683    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62db1c721951a36c62a6369a30c651a661eb2871f8363fa341ef8ad7b7080a07"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334742    1528 topology_manager.go:215] "Topology Admit Handler" podUID="180cf4cc399d604c28cc4df1442ebd5a" podNamespace="kube-system" podName="kube-apiserver-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.336338    1528 topology_manager.go:215] "Topology Admit Handler" podUID="37865ce1914dc04a4a0a25e98b80ce35" podNamespace="kube-system" podName="kube-controller-manager-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.338106    1528 topology_manager.go:215] "Topology Admit Handler" podUID="4d9c84710aef19c4449f4b7691d0af07" podNamespace="kube-system" podName="kube-scheduler-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340794    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7d28a97ba1c48cbe8edd3eab76f64cdcdebf920a03921644f63d12856b642f0"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340848    1528 topology_manager.go:215] "Topology Admit Handler" podUID="76e8893277ba7cea6624561880496e47" podNamespace="kube-system" podName="etcd-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.341927    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04d7b3d4fcc648cd6b447a383defba86200f1071acc892670457ebeebb52f22"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.342208    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bc6043f7b92f091f4ceee7db3e11617072391c6e5303f4ecdafdb06d4b585a"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.356667    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="400ms"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.365771    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.380268    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b6aa9a0e1d1cbcee858808fc74f396cfba20777f2316093484920397e9b4ca"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397846    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-ca-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:13.177782    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397877    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:13.178367    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397922    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-flexvolume-dir\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:13.178443    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397961    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-k8s-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:13.178540    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397979    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-kubeconfig\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:13.178582    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398000    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-data\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:13.178582    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398019    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-k8s-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:13.178661    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398038    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-ca-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:13.178744    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398055    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d9c84710aef19c4449f4b7691d0af07-kubeconfig\") pod \"kube-scheduler-multinode-813300\" (UID: \"4d9c84710aef19c4449f4b7691d0af07\") " pod="kube-system/kube-scheduler-multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398073    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-certs\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.400870    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.416196    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689b8976cc0293bf6ae2ffaf7abbe0a59cfa7521907fd652e86da3912515d25d"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.442360    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10e49596de5e51f9986bebf2105f07084a083e5e8c2ab50684531210b032662"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.454932    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.456598    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.759421    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="800ms"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.858477    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.859580    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.205231    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.205310    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.248476    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.249836    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.406658    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.406731    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.487592    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99"
	I0610 12:32:13.178768    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.561164    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="1.6s"
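
Note the lease controller's retry interval doubling across these failures: 400ms at 12:30:54.356, 800ms at 12:30:54.759, then 1.6s at 12:30:55.561, while the apiserver on 172.17.150.144:8443 is still refusing connections. A minimal Go sketch of that retry shape (illustrative only; renewLease and the loop are assumptions, not kubelet's actual controller code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// renewLease stands in for the kubelet's lease request; here it always
// fails the way the log above does while the apiserver is down.
func renewLease() error {
	return errors.New("dial tcp 172.17.150.144:8443: connect: connection refused")
}

func main() {
	interval := 400 * time.Millisecond // first retry interval in the log
	for attempt := 1; attempt <= 3; attempt++ {
		if err := renewLease(); err != nil {
			fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, interval)
			time.Sleep(interval)
			interval *= 2 // 400ms -> 800ms -> 1.6s, as logged above
		}
	}
}
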
	I0610 12:32:13.179301    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.661352    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:13.179301    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.663943    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:13.179349    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.751130    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:13.179422    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.751205    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
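
The reflector warnings above (Service, Node, RuntimeClass, CSIDriver) come from client-go informers, which list-then-watch each resource and keep retrying until the apiserver is reachable. A hedged sketch of wiring up one such informer with client-go (the kubeconfig path is an assumption; the kubelet builds its clients differently):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a reachable kubeconfig at this path. While the apiserver
	// refuses connections (as above), the reflector's initial List fails
	// and is retried with backoff, producing exactly these W/E pairs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()
	svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { fmt.Println("service added") },
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // kicks off the reflector's list+watch loop
	cache.WaitForCacheSync(stop, svcInformer.HasSynced)
	time.Sleep(30 * time.Second) // let events arrive in this sketch
}
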
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:56 multinode-813300 kubelet[1528]: E0610 12:30:56.215699    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:57 multinode-813300 kubelet[1528]: I0610 12:30:57.265569    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636898    1528 kubelet_node_status.go:112] "Node was previously registered" node="multinode-813300"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636993    1528 kubelet_node_status.go:76] "Successfully registered node" node="multinode-813300"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.638685    1528 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639257    1528 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639985    1528 setters.go:580] "Node became not ready" node="multinode-813300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-10T12:30:59Z","lastTransitionTime":"2024-06-10T12:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
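
That condition payload is the standard v1 Node Ready condition; the node stays NotReady until a CNI config appears (here, written once the kindnet pod is back up). A small client-go sketch for reading it back (illustrative only; the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "multinode-813300", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			// While the CNI config is missing this reports Status=False,
			// Reason=KubeletNotReady, matching the setters.go line above.
			fmt.Printf("Ready: status=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
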
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.103240    1528 apiserver.go:52] "Watching apiserver"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109200    1528 topology_manager.go:215] "Topology Admit Handler" podUID="40bf0aff-00b2-40c7-bed7-52b8cadbc3a1" podNamespace="kube-system" podName="kube-proxy-nrpvt"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109472    1528 topology_manager.go:215] "Topology Admit Handler" podUID="aad8124e-6c05-4719-9adb-edc11b3cce42" podNamespace="kube-system" podName="kindnet-29gbv"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109721    1528 topology_manager.go:215] "Topology Admit Handler" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kbhvv"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109954    1528 topology_manager.go:215] "Topology Admit Handler" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e" podNamespace="kube-system" podName="storage-provisioner"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110077    1528 topology_manager.go:215] "Topology Admit Handler" podUID="3191c71a-8c87-4390-8232-8653f494d1f0" podNamespace="default" podName="busybox-fc5497c4f-z28tq"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.110308    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.179462    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110641    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-813300" podUID="f824b391-b3d2-49ec-ba7d-863cb2150f81"
	I0610 12:32:13.180016    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.111896    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:13.180016    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.115871    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.180086    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.147565    1528 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.155423    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160314    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6dfedc3-d6ff-412c-8a13-40a493c4199e-tmp\") pod \"storage-provisioner\" (UID: \"f6dfedc3-d6ff-412c-8a13-40a493c4199e\") " pod="kube-system/storage-provisioner"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160428    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-cni-cfg\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-xtables-lock\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161224    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-xtables-lock\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161359    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-lib-modules\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162089    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162182    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.662151031 +0000 UTC m=+6.753273950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.162238    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-lib-modules\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.175000    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-813300"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.186991    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187290    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.687498638 +0000 UTC m=+6.778621457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.246331    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f80d01e953cc664fc05c397fdad000" path="/var/lib/kubelet/pods/93f80d01e953cc664fc05c397fdad000/volumes"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.248399    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa7bd9cfb361baaed8d7d5729a6c77c" path="/var/lib/kubelet/pods/baa7bd9cfb361baaed8d7d5729a6c77c/volumes"
	I0610 12:32:13.180172    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.316426    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-813300" podStartSLOduration=0.316407314 podStartE2EDuration="316.407314ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.316147208 +0000 UTC m=+6.407270027" watchObservedRunningTime="2024-06-10 12:31:00.316407314 +0000 UTC m=+6.407530233"
	I0610 12:32:13.180723    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.439081    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-813300" podStartSLOduration=0.439018164 podStartE2EDuration="439.018164ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.409703778 +0000 UTC m=+6.500826597" watchObservedRunningTime="2024-06-10 12:31:00.439018164 +0000 UTC m=+6.530141083"
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.631684    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667882    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667966    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.667947638 +0000 UTC m=+7.759070557 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769226    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769334    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769428    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.769408565 +0000 UTC m=+7.860531384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.231939    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.679952    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.680142    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.680120563 +0000 UTC m=+9.771243482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.781772    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.180779    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782050    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181342    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782132    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.7821123 +0000 UTC m=+9.873235219 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181342    8536 command_runner.go:130] > Jun 10 12:31:02 multinode-813300 kubelet[1528]: E0610 12:31:02.234039    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181342    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.232296    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181466    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.701884    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.181496    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.702058    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.702037863 +0000 UTC m=+13.793160782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.181561    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802160    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181586    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802233    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181635    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802292    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.802272966 +0000 UTC m=+13.893395785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181700    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.207349    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.238069    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:05 multinode-813300 kubelet[1528]: E0610 12:31:05.232753    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:06 multinode-813300 kubelet[1528]: E0610 12:31:06.233804    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.231988    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736592    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736825    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.736801176 +0000 UTC m=+21.827923995 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837037    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837146    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837219    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.837199504 +0000 UTC m=+21.928322423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:08 multinode-813300 kubelet[1528]: E0610 12:31:08.232310    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.208416    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.231620    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:10 multinode-813300 kubelet[1528]: E0610 12:31:10.233882    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:11 multinode-813300 kubelet[1528]: E0610 12:31:11.232126    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:12 multinode-813300 kubelet[1528]: E0610 12:31:12.233695    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:13 multinode-813300 kubelet[1528]: E0610 12:31:13.231660    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.181786    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.210433    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.182312    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.234870    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182312    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.232790    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182403    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816637    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.182447    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816990    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.816931565 +0000 UTC m=+37.908054384 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.182475    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918429    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.182475    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918619    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918694    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.918675278 +0000 UTC m=+38.009798097 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:16 multinode-813300 kubelet[1528]: E0610 12:31:16.234954    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:17 multinode-813300 kubelet[1528]: E0610 12:31:17.231668    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:18 multinode-813300 kubelet[1528]: E0610 12:31:18.232656    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.214153    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.231639    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:20 multinode-813300 kubelet[1528]: E0610 12:31:20.234429    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:21 multinode-813300 kubelet[1528]: E0610 12:31:21.232080    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:22 multinode-813300 kubelet[1528]: E0610 12:31:22.232638    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:23 multinode-813300 kubelet[1528]: E0610 12:31:23.233105    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.216593    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.233280    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:25 multinode-813300 kubelet[1528]: E0610 12:31:25.232513    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.182531    8536 command_runner.go:130] > Jun 10 12:31:26 multinode-813300 kubelet[1528]: E0610 12:31:26.232337    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.183087    8536 command_runner.go:130] > Jun 10 12:31:27 multinode-813300 kubelet[1528]: E0610 12:31:27.233152    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.183087    8536 command_runner.go:130] > Jun 10 12:31:28 multinode-813300 kubelet[1528]: E0610 12:31:28.234103    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.183193    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.218816    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.232070    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:30 multinode-813300 kubelet[1528]: E0610 12:31:30.231766    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.231673    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884791    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884975    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.884956587 +0000 UTC m=+69.976079506 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985181    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985216    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.98525598 +0000 UTC m=+70.076378799 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
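
The durationBeforeRetry values in these MountVolume.SetUp failures double each round (500ms, 1s, 2s, 4s, 8s, 16s, 32s) and keep growing until the coredns ConfigMap and kube-root-ca.crt objects register in the kubelet's object cache. The schedule matches a standard exponential backoff; a sketch using the apimachinery wait helper (Steps and Cap here are assumptions for the sketch):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Reproduces the logged schedule: 500ms, 1s, 2s, 4s, 8s, 16s, 32s.
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond,
		Factor:   2.0,
		Steps:    7,
		Cap:      2 * time.Minute,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		// Stand-in for MountVolume.SetUp: it keeps failing while the
		// "coredns" ConfigMap is not yet in the kubelet's object cache.
		fmt.Println("MountVolume.SetUp failed; backing off")
		return false, nil
	})
	fmt.Println("result:", err) // wait.ErrWaitTimeout once Steps are exhausted
}
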
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.232018    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.476305    1528 scope.go:117] "RemoveContainer" containerID="d32ce22e31b06bacb7530f3513c1f864d77685269868404ad7c71a4f15d91e41"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.477175    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.477659    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f6dfedc3-d6ff-412c-8a13-40a493c4199e)\"" pod="kube-system/storage-provisioner" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e"
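
storage-provisioner is in CrashLoopBackOff at this point. kubelet's restart backoff for a crashing container starts at 10s (the "back-off 10s" above), doubles on each crash up to a 5-minute cap, and resets after the container runs cleanly for a while. A toy model of that schedule (assumed constants, not kubelet source):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet behavior behind the "back-off 10s" message above.
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for crash := 1; crash <= 6; crash++ {
		fmt.Printf("after crash %d: CrashLoopBackOff, next restart in %v\n", crash, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
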
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:33 multinode-813300 kubelet[1528]: E0610 12:31:33.232631    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 kubelet[1528]: I0610 12:31:47.231895    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.214930    1528 scope.go:117] "RemoveContainer" containerID="34b9299d74e382eadb8e7df1029506efc87e283ac8b38024d9524b8bb815f705"
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: E0610 12:31:54.266189    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:13.183218    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:13.183951    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:13.183951    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:13.183951    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.275663    1528 scope.go:117] "RemoveContainer" containerID="ba52603f8387590319a4d5a9511265065e2f90bff6628bec2f622754e034c70a"
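The "durationBeforeRetry 32s" and "back-off 10s restarting failed container" lines above are both capped exponential backoff at work in the kubelet. A minimal sketch of that retry shape in Go (illustrative only, not the kubelet's nestedpendingoperations code; the base and cap values here are assumptions):

package main

import (
	"fmt"
	"time"
)

// backoff returns the delay before retry attempt n: it doubles from base
// and is capped at max — the shape behind the kubelet messages quoted above.
func backoff(n int, base, max time.Duration) time.Duration {
	d := base
	for i := 0; i < n; i++ {
		if d >= max/2 {
			return max
		}
		d *= 2
	}
	return d
}

func main() {
	// With a 500ms base, attempt 6 lands on the 32s seen in the log above.
	for n := 0; n <= 6; n++ {
		fmt.Printf("attempt %d: wait %s\n", n, backoff(n, 500*time.Millisecond, 2*time.Minute))
	}
}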
	I0610 12:32:13.230405    8536 logs.go:123] Gathering logs for coredns [24f3f7e041f9] ...
	I0610 12:32:13.230405    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f3f7e041f9"
	I0610 12:32:13.269497    8536 command_runner.go:130] > .:53
	I0610 12:32:13.269497    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:13.269497    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:13.269497    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:13.269497    8536 command_runner.go:130] > [INFO] 127.0.0.1:34387 - 41508 "HINFO IN 7171992165040069679.5605173313288368349. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051230172s
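The NXDOMAIN answer to that random HINFO name is CoreDNS's loop-detection self-probe completing normally (NXDOMAIN means the query was not forwarded back to CoreDNS itself). A sketch of sending the same kind of probe with the miekg/dns library (the target address is an assumed cluster-DNS IP; this only mimics the probe shape, it is not CoreDNS's loop plugin):

package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// Random-looking name, HINFO type — the same shape as the query in the
	// coredns log above; an NXDOMAIN reply is the healthy outcome.
	m := new(dns.Msg)
	m.SetQuestion("7171992165040069679.5605173313288368349.", dns.TypeHINFO)

	c := new(dns.Client)
	r, _, err := c.Exchange(m, "10.96.0.10:53") // assumed cluster DNS address
	if err != nil {
		panic(err)
	}
	fmt.Println(dns.RcodeToString[r.Rcode]) // expect NXDOMAIN
}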
	I0610 12:32:13.269497    8536 logs.go:123] Gathering logs for kube-controller-manager [3bee53d5fef9] ...
	I0610 12:32:13.269497    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bee53d5fef9"
	I0610 12:32:13.308290    8536 command_runner.go:130] ! I0610 12:30:56.976566       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:13.308360    8536 command_runner.go:130] ! I0610 12:30:58.260708       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:13.308360    8536 command_runner.go:130] ! I0610 12:30:58.260892       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:13.308360    8536 command_runner.go:130] ! I0610 12:30:58.266101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:13.308426    8536 command_runner.go:130] ! I0610 12:30:58.267393       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:13.308426    8536 command_runner.go:130] ! I0610 12:30:58.268203       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:13.308426    8536 command_runner.go:130] ! I0610 12:30:58.268377       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:13.308485    8536 command_runner.go:130] ! I0610 12:31:01.430160       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:13.308524    8536 command_runner.go:130] ! I0610 12:31:01.430459       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:13.308571    8536 command_runner.go:130] ! I0610 12:31:01.456745       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:13.308571    8536 command_runner.go:130] ! I0610 12:31:01.457409       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:13.308621    8536 command_runner.go:130] ! I0610 12:31:01.457489       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:13.308648    8536 command_runner.go:130] ! I0610 12:31:01.457839       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:13.308648    8536 command_runner.go:130] ! I0610 12:31:01.509226       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:13.308648    8536 command_runner.go:130] ! I0610 12:31:01.512712       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:13.308648    8536 command_runner.go:130] ! I0610 12:31:01.512947       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:13.308709    8536 command_runner.go:130] ! I0610 12:31:01.517463       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:13.308735    8536 command_runner.go:130] ! I0610 12:31:01.520424       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:13.308735    8536 command_runner.go:130] ! I0610 12:31:01.528150       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:13.308735    8536 command_runner.go:130] ! I0610 12:31:01.528371       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:13.308735    8536 command_runner.go:130] ! I0610 12:31:01.528506       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:13.308801    8536 command_runner.go:130] ! I0610 12:31:01.528651       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:13.308801    8536 command_runner.go:130] ! I0610 12:31:01.533407       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:13.308865    8536 command_runner.go:130] ! I0610 12:31:01.543133       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:13.308890    8536 command_runner.go:130] ! I0610 12:31:01.548293       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:13.308914    8536 command_runner.go:130] ! I0610 12:31:01.548310       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:13.308971    8536 command_runner.go:130] ! I0610 12:31:01.548473       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:13.308993    8536 command_runner.go:130] ! I0610 12:31:01.548492       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:13.308993    8536 command_runner.go:130] ! I0610 12:31:01.548660       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:13.309032    8536 command_runner.go:130] ! I0610 12:31:01.548672       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:13.309032    8536 command_runner.go:130] ! I0610 12:31:01.595194       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:13.309080    8536 command_runner.go:130] ! I0610 12:31:01.595266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:13.309100    8536 command_runner.go:130] ! I0610 12:31:01.595295       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:13.309100    8536 command_runner.go:130] ! I0610 12:31:01.595320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:13.309100    8536 command_runner.go:130] ! I0610 12:31:01.595340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:13.309100    8536 command_runner.go:130] ! I0610 12:31:01.595360       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:13.309203    8536 command_runner.go:130] ! I0610 12:31:01.595381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:13.309230    8536 command_runner.go:130] ! I0610 12:31:01.595402       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595465       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! W0610 12:31:01.595507       1 shared_informer.go:597] resyncPeriod 13h16m37.278540311s is smaller than resyncCheckPeriod 16h53m16.378760609s and the informer has already started. Changing it to 16h53m16.378760609s
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595706       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595754       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595782       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595923       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.595956       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597416       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597453       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597489       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597516       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597922       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.597937       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.598081       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.614277       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.614469       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.614504       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:13.309253    8536 command_runner.go:130] ! I0610 12:31:01.618176       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:13.309780    8536 command_runner.go:130] ! I0610 12:31:01.618586       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:13.309825    8536 command_runner.go:130] ! I0610 12:31:01.618885       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:13.309825    8536 command_runner.go:130] ! I0610 12:31:01.623374       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:13.309875    8536 command_runner.go:130] ! I0610 12:31:01.624235       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:13.309875    8536 command_runner.go:130] ! I0610 12:31:01.624265       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:13.309875    8536 command_runner.go:130] ! I0610 12:31:01.629921       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:13.309875    8536 command_runner.go:130] ! I0610 12:31:01.630154       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:13.309875    8536 command_runner.go:130] ! I0610 12:31:01.630164       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:13.309941    8536 command_runner.go:130] ! I0610 12:31:01.634130       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:13.309958    8536 command_runner.go:130] ! I0610 12:31:01.634452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:13.309958    8536 command_runner.go:130] ! I0610 12:31:01.634467       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:13.309958    8536 command_runner.go:130] ! I0610 12:31:01.639133       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.639154       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.639163       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.639622       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.639640       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.643940       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.644017       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.644031       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.652714       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.657163       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.657350       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:13.310020    8536 command_runner.go:130] ! E0610 12:31:01.664322       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.664388       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.694061       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.694262       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.694273       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.722911       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.725806       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.726026       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.734788       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.735047       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.735083       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.759990       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.761603       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.761772       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.769963       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.773525       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.773866       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.773998       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.778762       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.778803       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.778833       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.310020    8536 command_runner.go:130] ! I0610 12:31:01.779416       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:13.310552    8536 command_runner.go:130] ! I0610 12:31:01.779429       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:13.310552    8536 command_runner.go:130] ! I0610 12:31:01.779447       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.310595    8536 command_runner.go:130] ! I0610 12:31:01.780731       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:13.310595    8536 command_runner.go:130] ! I0610 12:31:01.782261       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:13.310595    8536 command_runner.go:130] ! I0610 12:31:01.783730       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:13.310654    8536 command_runner.go:130] ! I0610 12:31:01.782277       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.310654    8536 command_runner.go:130] ! I0610 12:31:01.782337       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:13.310654    8536 command_runner.go:130] ! I0610 12:31:01.784928       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:13.310749    8536 command_runner.go:130] ! I0610 12:31:01.782348       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:13.310749    8536 command_runner.go:130] ! I0610 12:31:11.813253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:13.310749    8536 command_runner.go:130] ! I0610 12:31:11.813374       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:13.310795    8536 command_runner.go:130] ! I0610 12:31:11.813998       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:13.310821    8536 command_runner.go:130] ! I0610 12:31:11.815397       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:13.310821    8536 command_runner.go:130] ! I0610 12:31:11.818405       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:13.310821    8536 command_runner.go:130] ! I0610 12:31:11.818514       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:13.310821    8536 command_runner.go:130] ! I0610 12:31:11.819007       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:13.310889    8536 command_runner.go:130] ! I0610 12:31:11.819350       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:13.310889    8536 command_runner.go:130] ! I0610 12:31:11.821748       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:13.310951    8536 command_runner.go:130] ! I0610 12:31:11.821802       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:13.310951    8536 command_runner.go:130] ! I0610 12:31:11.822113       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:13.310951    8536 command_runner.go:130] ! I0610 12:31:11.822204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:13.311047    8536 command_runner.go:130] ! I0610 12:31:11.822232       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:13.311047    8536 command_runner.go:130] ! I0610 12:31:11.826332       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:13.311047    8536 command_runner.go:130] ! I0610 12:31:11.826815       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:13.311047    8536 command_runner.go:130] ! I0610 12:31:11.826831       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:13.311099    8536 command_runner.go:130] ! E0610 12:31:11.830024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:13.311099    8536 command_runner.go:130] ! I0610 12:31:11.830417       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:13.311143    8536 command_runner.go:130] ! I0610 12:31:11.835752       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:13.311143    8536 command_runner.go:130] ! I0610 12:31:11.836296       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:13.311143    8536 command_runner.go:130] ! I0610 12:31:11.836330       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:13.311143    8536 command_runner.go:130] ! I0610 12:31:11.839311       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:13.311213    8536 command_runner.go:130] ! I0610 12:31:11.839512       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:13.311213    8536 command_runner.go:130] ! I0610 12:31:11.839590       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:13.311213    8536 command_runner.go:130] ! I0610 12:31:11.842028       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:13.311278    8536 command_runner.go:130] ! I0610 12:31:11.842220       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:13.311278    8536 command_runner.go:130] ! I0610 12:31:11.842603       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:13.311302    8536 command_runner.go:130] ! I0610 12:31:11.842639       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.845940       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.846359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.846982       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.849897       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.850381       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.850613       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.853131       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.853418       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.853675       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.856318       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.856441       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.856643       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.856381       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.902405       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.903166       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.906707       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.907117       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.907152       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.910144       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.910388       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.910498       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.913998       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.914276       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.915779       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.916916       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.917975       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.918292       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.930523       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.947621       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.948394       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.948768       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:13.311332    8536 command_runner.go:130] ! I0610 12:31:11.954911       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:13.311858    8536 command_runner.go:130] ! I0610 12:31:11.957486       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:13.311858    8536 command_runner.go:130] ! I0610 12:31:11.963420       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:13.311858    8536 command_runner.go:130] ! I0610 12:31:11.973610       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:13.311906    8536 command_runner.go:130] ! I0610 12:31:11.979167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:13.311906    8536 command_runner.go:130] ! I0610 12:31:11.980674       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:11.984963       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:11.985188       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:11.994612       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.003389       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.007898       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.011185       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.013303       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.014815       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.016632       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.016812       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.016947       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.017245       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.017927       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.018270       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.019668       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.019818       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.023667       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.024171       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.025888       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.026414       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.026742       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.026899       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.031613       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.035671       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:13.311938    8536 command_runner.go:130] ! I0610 12:31:12.038980       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:13.312491    8536 command_runner.go:130] ! I0610 12:31:12.040498       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.044612       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.044983       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.048630       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.048809       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.050934       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.051748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.77596ms"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.058669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.911µs"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.061957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.647762ms"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.062771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="326.05µs"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.074892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.074973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.075004       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.075594       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.130853       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.140823       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.147492       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.174418       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.201305       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.218626       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.243193       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.658052       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.658432       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:12.674720       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:31:42.085794       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:32:06.626500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.481917ms"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:32:06.626834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.891µs"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:32:06.653330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="217.376µs"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:32:06.704393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.856077ms"
	I0610 12:32:13.312636    8536 command_runner.go:130] ! I0610 12:32:06.705453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.995µs"
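Nearly every "Waiting for caches to sync" / "Caches are synced" pair in this excerpt is the standard client-go shared-informer startup handshake. A minimal sketch of that handshake using the real client-go API (kubeconfig path assumed; illustrative, not kube-controller-manager's own wiring):

package main

import (
	"fmt"
	"path/filepath"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	// Same lifecycle the controller-manager logs show: start the informers,
	// then block until the local caches have synced with the apiserver.
	factory := informers.NewSharedInformerFactory(clientset, 0)
	pods := factory.Core().V1().Pods().Informer()
	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, pods.HasSynced) {
		panic("caches did not sync")
	}
	fmt.Println("caches are synced")
}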
	I0610 12:32:15.836528    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:32:15.843887    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 200:
	ok
	I0610 12:32:15.843887    8536 round_trippers.go:463] GET https://172.17.150.144:8443/version
	I0610 12:32:15.843887    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:15.843887    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:15.843887    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:15.845987    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:15.845987    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:15.847028    8536 round_trippers.go:580]     Audit-Id: ab9a397a-32bc-4417-a374-81802ca7effc
	I0610 12:32:15.847028    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:15.847028    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:15.847028    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:15.847028    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:15.847028    8536 round_trippers.go:580]     Content-Length: 263
	I0610 12:32:15.847028    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:15 GMT
	I0610 12:32:15.847028    8536 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 12:32:15.847182    8536 api_server.go:141] control plane version: v1.30.1
	I0610 12:32:15.847231    8536 api_server.go:131] duration metric: took 3.846962s to wait for apiserver health ...
	I0610 12:32:15.847275    8536 system_pods.go:43] waiting for kube-system pods to appear ...
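The wait above amounts to polling GET /healthz until it returns 200 "ok", then reading /version. A standalone reproduction of those two requests (the address is the one from this run; InsecureSkipVerify because minikube's apiserver cert is signed by its own CA, so this is only appropriate against a throwaway cluster):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://172.17.150.144:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("GET %s -> %d\n%s\n", path, resp.StatusCode, body)
	}
}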
	I0610 12:32:15.858925    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 12:32:15.885859    8536 command_runner.go:130] > d7941126134f
	I0610 12:32:15.885859    8536 logs.go:276] 1 containers: [d7941126134f]
	I0610 12:32:15.901183    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 12:32:15.932778    8536 command_runner.go:130] > 877ee07c1499
	I0610 12:32:15.934782    8536 logs.go:276] 1 containers: [877ee07c1499]
	I0610 12:32:15.944487    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 12:32:15.969000    8536 command_runner.go:130] > 24f3f7e041f9
	I0610 12:32:15.970039    8536 command_runner.go:130] > f2e39052db19
	I0610 12:32:15.970075    8536 logs.go:276] 2 containers: [24f3f7e041f9 f2e39052db19]
	I0610 12:32:15.979387    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 12:32:16.008128    8536 command_runner.go:130] > d90e72ef4670
	I0610 12:32:16.009000    8536 command_runner.go:130] > bd1a6cd98743
	I0610 12:32:16.009000    8536 logs.go:276] 2 containers: [d90e72ef4670 bd1a6cd98743]
	I0610 12:32:16.018371    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 12:32:16.040409    8536 command_runner.go:130] > 1de5fa0ef838
	I0610 12:32:16.040409    8536 command_runner.go:130] > afad8b05897e
	I0610 12:32:16.042402    8536 logs.go:276] 2 containers: [1de5fa0ef838 afad8b05897e]
	I0610 12:32:16.052349    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 12:32:16.075451    8536 command_runner.go:130] > 3bee53d5fef9
	I0610 12:32:16.075451    8536 command_runner.go:130] > f1409bf44ff1
	I0610 12:32:16.076672    8536 logs.go:276] 2 containers: [3bee53d5fef9 f1409bf44ff1]
	I0610 12:32:16.086234    8536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 12:32:16.109689    8536 command_runner.go:130] > c3c4316beca6
	I0610 12:32:16.109689    8536 command_runner.go:130] > c39d54960e7d
	I0610 12:32:16.109689    8536 logs.go:276] 2 containers: [c3c4316beca6 c39d54960e7d]
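The pattern above — one docker ps name filter per control-plane component, then docker logs --tail 400 per matching container ID — can be replayed by hand on the node. A rough equivalent of that gathering loop (illustrative; run where the cluster's Docker runtime is reachable):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		// Same filter seen in the log: under the Docker runtime, pod
		// containers are named with a k8s_<component> prefix.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Printf("=== %s [%s] ===\n", c, id)
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}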
	I0610 12:32:16.109689    8536 logs.go:123] Gathering logs for kube-scheduler [bd1a6cd98743] ...
	I0610 12:32:16.109689    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1a6cd98743"
	I0610 12:32:16.139341    8536 command_runner.go:130] ! I0610 12:07:55.711360       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:16.139341    8536 command_runner.go:130] ! W0610 12:07:57.417322       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:16.140127    8536 command_runner.go:130] ! W0610 12:07:57.417963       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.140274    8536 command_runner.go:130] ! W0610 12:07:57.418046       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:16.140274    8536 command_runner.go:130] ! W0610 12:07:57.418071       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:16.140329    8536 command_runner.go:130] ! I0610 12:07:57.459055       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:16.140329    8536 command_runner.go:130] ! I0610 12:07:57.460659       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.140329    8536 command_runner.go:130] ! I0610 12:07:57.464904       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:16.140397    8536 command_runner.go:130] ! I0610 12:07:57.464952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:16.140397    8536 command_runner.go:130] ! I0610 12:07:57.466483       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:16.140397    8536 command_runner.go:130] ! I0610 12:07:57.466650       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:16.140472    8536 command_runner.go:130] ! W0610 12:07:57.502453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:16.140535    8536 command_runner.go:130] ! E0610 12:07:57.507264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:16.140573    8536 command_runner.go:130] ! W0610 12:07:57.503672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140606    8536 command_runner.go:130] ! W0610 12:07:57.506076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:16.140606    8536 command_runner.go:130] ! W0610 12:07:57.506243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140606    8536 command_runner.go:130] ! W0610 12:07:57.506320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:16.140606    8536 command_runner.go:130] ! W0610 12:07:57.506362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140606    8536 command_runner.go:130] ! W0610 12:07:57.506402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! W0610 12:07:57.506651       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.140777    8536 command_runner.go:130] ! W0610 12:07:57.506722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! W0610 12:07:57.507113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! W0610 12:07:57.507193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.511548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.511795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.512240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.512647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.515128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.515218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.515698       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.516017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.516332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.140777    8536 command_runner.go:130] ! E0610 12:07:57.516529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:16.141317    8536 command_runner.go:130] ! W0610 12:07:57.537276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:16.141363    8536 command_runner.go:130] ! E0610 12:07:57.537491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:16.141462    8536 command_runner.go:130] ! W0610 12:07:57.537680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:16.141462    8536 command_runner.go:130] ! E0610 12:07:57.538611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:16.141555    8536 command_runner.go:130] ! W0610 12:07:57.537622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:16.141555    8536 command_runner.go:130] ! E0610 12:07:57.538734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:16.141618    8536 command_runner.go:130] ! W0610 12:07:57.538013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:16.141681    8536 command_runner.go:130] ! E0610 12:07:57.539237       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:16.141723    8536 command_runner.go:130] ! W0610 12:07:58.345815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:16.141769    8536 command_runner.go:130] ! E0610 12:07:58.345914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:32:16.141811    8536 command_runner.go:130] ! W0610 12:07:58.356843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:16.141811    8536 command_runner.go:130] ! E0610 12:07:58.357045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0610 12:32:16.141883    8536 command_runner.go:130] ! W0610 12:07:58.406587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.141883    8536 command_runner.go:130] ! E0610 12:07:58.406863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.141951    8536 command_runner.go:130] ! W0610 12:07:58.426795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142009    8536 command_runner.go:130] ! E0610 12:07:58.427119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142038    8536 command_runner.go:130] ! W0610 12:07:58.503514       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.142038    8536 command_runner.go:130] ! E0610 12:07:58.503568       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.142098    8536 command_runner.go:130] ! W0610 12:07:58.610877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:16.142158    8536 command_runner.go:130] ! E0610 12:07:58.611650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0610 12:32:16.142192    8536 command_runner.go:130] ! W0610 12:07:58.611603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:16.142223    8536 command_runner.go:130] ! E0610 12:07:58.612141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 12:32:16.142283    8536 command_runner.go:130] ! W0610 12:07:58.614694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:16.142314    8536 command_runner.go:130] ! E0610 12:07:58.614992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.752570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.752635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.810605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.810721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.815170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.815852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.816493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.816687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.834947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.836145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.838693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.838938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! W0610 12:07:58.897162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:07:58.897200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:32:16.142341    8536 command_runner.go:130] ! I0610 12:08:01.565495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:16.142341    8536 command_runner.go:130] ! E0610 12:28:16.298586       1 run.go:74] "command failed" err="finished without leader elect"
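	The kube-scheduler block above is the usual startup race against the API server's RBAC bootstrap: every reflector.go warning/error pair is one informer failing its initial LIST because the system:kube-scheduler role bindings were not yet reconciled, and the noise stops once "Caches are synced" appears at 12:08:01. The final run.go line ("finished without leader elect") is the scheduler process exiting its leader-election loop at 12:28:16, which here coincides with the control plane being taken down by the test rather than a crash. A minimal sketch of performing the same authorization check those LISTs failed, assuming client-go and a kubeconfig at the default location (both assumptions, not part of this run):

	// sar_check.go: ask the API server whether system:kube-scheduler may
	// list Services cluster-wide, mirroring the "services is forbidden"
	// lines above. Illustrative only.
	package main

	import (
		"context"
		"fmt"
		"log"

		authorizationv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// SubjectAccessReview runs the same authorization decision the API
		// server made when it rejected the scheduler's LIST requests.
		sar := &authorizationv1.SubjectAccessReview{
			Spec: authorizationv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Verb:     "list",
					Resource: "services",
				},
			},
		}
		resp, err := clientset.AuthorizationV1().SubjectAccessReviews().Create(
			context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
	}

	Once the RBAC objects exist, the same check returns allowed=true, which is why these warnings are transient during a control-plane restart.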
	I0610 12:32:16.159373    8536 logs.go:123] Gathering logs for kube-controller-manager [3bee53d5fef9] ...
	I0610 12:32:16.159373    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bee53d5fef9"
	I0610 12:32:16.189686    8536 command_runner.go:130] ! I0610 12:30:56.976566       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:16.190065    8536 command_runner.go:130] ! I0610 12:30:58.260708       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:30:58.260892       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:30:58.266101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:30:58.267393       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:30:58.268203       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:30:58.268377       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.430160       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.430459       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.456745       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.457409       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.457489       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.457839       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.509226       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.512712       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.512947       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.517463       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.520424       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.528150       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.528371       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.528506       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.528651       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.533407       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.543133       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548293       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548310       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548473       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548492       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548660       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:16.190150    8536 command_runner.go:130] ! I0610 12:31:01.548672       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:16.190716    8536 command_runner.go:130] ! I0610 12:31:01.595194       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595295       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595360       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595381       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595402       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595465       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! W0610 12:31:01.595507       1 shared_informer.go:597] resyncPeriod 13h16m37.278540311s is smaller than resyncCheckPeriod 16h53m16.378760609s and the informer has already started. Changing it to 16h53m16.378760609s
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595706       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595754       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595782       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:16.190766    8536 command_runner.go:130] ! I0610 12:31:01.595923       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:16.191305    8536 command_runner.go:130] ! I0610 12:31:01.595956       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:16.191305    8536 command_runner.go:130] ! I0610 12:31:01.597357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:16.191360    8536 command_runner.go:130] ! I0610 12:31:01.597416       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:16.191360    8536 command_runner.go:130] ! I0610 12:31:01.597453       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:16.191360    8536 command_runner.go:130] ! I0610 12:31:01.597489       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:16.191416    8536 command_runner.go:130] ! I0610 12:31:01.597516       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:16.191416    8536 command_runner.go:130] ! I0610 12:31:01.597922       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:16.191452    8536 command_runner.go:130] ! I0610 12:31:01.597937       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:16.191452    8536 command_runner.go:130] ! I0610 12:31:01.598081       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:16.191529    8536 command_runner.go:130] ! I0610 12:31:01.614277       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:16.191567    8536 command_runner.go:130] ! I0610 12:31:01.614469       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:16.191567    8536 command_runner.go:130] ! I0610 12:31:01.614504       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:16.191607    8536 command_runner.go:130] ! I0610 12:31:01.618176       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.618586       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.618885       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.623374       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.624235       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.624265       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.629921       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.630154       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.630164       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.634130       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.634452       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.634467       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.639133       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.639154       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.639163       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.639622       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.639640       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.643940       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.644017       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.644031       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.652714       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.657163       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.657350       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:16.191669    8536 command_runner.go:130] ! E0610 12:31:01.664322       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.664388       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.694061       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.694262       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.694273       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.722911       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.725806       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.726026       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.734788       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.735047       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:16.191669    8536 command_runner.go:130] ! I0610 12:31:01.735083       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:16.192201    8536 command_runner.go:130] ! I0610 12:31:01.759990       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:16.192201    8536 command_runner.go:130] ! I0610 12:31:01.761603       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:16.192242    8536 command_runner.go:130] ! I0610 12:31:01.761772       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:16.192242    8536 command_runner.go:130] ! I0610 12:31:01.769963       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:16.192242    8536 command_runner.go:130] ! I0610 12:31:01.773525       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:16.192309    8536 command_runner.go:130] ! I0610 12:31:01.773866       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:16.192309    8536 command_runner.go:130] ! I0610 12:31:01.773998       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:16.192342    8536 command_runner.go:130] ! I0610 12:31:01.778762       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:16.192342    8536 command_runner.go:130] ! I0610 12:31:01.778803       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:16.192342    8536 command_runner.go:130] ! I0610 12:31:01.778833       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.192342    8536 command_runner.go:130] ! I0610 12:31:01.779416       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:16.192460    8536 command_runner.go:130] ! I0610 12:31:01.779429       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:16.192460    8536 command_runner.go:130] ! I0610 12:31:01.779447       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.192460    8536 command_runner.go:130] ! I0610 12:31:01.780731       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:16.192507    8536 command_runner.go:130] ! I0610 12:31:01.782261       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:16.192507    8536 command_runner.go:130] ! I0610 12:31:01.783730       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:16.192557    8536 command_runner.go:130] ! I0610 12:31:01.782277       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.192557    8536 command_runner.go:130] ! I0610 12:31:01.782337       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:16.192639    8536 command_runner.go:130] ! I0610 12:31:01.784928       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:16.192639    8536 command_runner.go:130] ! I0610 12:31:01.782348       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.192711    8536 command_runner.go:130] ! I0610 12:31:11.813253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:16.192711    8536 command_runner.go:130] ! I0610 12:31:11.813374       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.813998       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.815397       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.818405       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.818514       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.819007       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.819350       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.821748       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.821802       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.822113       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.822204       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.822232       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.826332       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.826815       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.826831       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:16.192753    8536 command_runner.go:130] ! E0610 12:31:11.830024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.830417       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.835752       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.836296       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.836330       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.839311       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.839512       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.839590       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.842028       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.842220       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.842603       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.842639       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.845940       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.846359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.846982       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.849897       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.850381       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.850613       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:16.192753    8536 command_runner.go:130] ! I0610 12:31:11.853131       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:16.193275    8536 command_runner.go:130] ! I0610 12:31:11.853418       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:16.193275    8536 command_runner.go:130] ! I0610 12:31:11.853675       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:16.193336    8536 command_runner.go:130] ! I0610 12:31:11.856318       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:16.193336    8536 command_runner.go:130] ! I0610 12:31:11.856441       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:16.193336    8536 command_runner.go:130] ! I0610 12:31:11.856643       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:16.193384    8536 command_runner.go:130] ! I0610 12:31:11.856381       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:16.193451    8536 command_runner.go:130] ! I0610 12:31:11.902405       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:16.193451    8536 command_runner.go:130] ! I0610 12:31:11.903166       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:16.193451    8536 command_runner.go:130] ! I0610 12:31:11.906707       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:16.193504    8536 command_runner.go:130] ! I0610 12:31:11.907117       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:16.193504    8536 command_runner.go:130] ! I0610 12:31:11.907152       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:16.193548    8536 command_runner.go:130] ! I0610 12:31:11.910144       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:16.193548    8536 command_runner.go:130] ! I0610 12:31:11.910388       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:16.193592    8536 command_runner.go:130] ! I0610 12:31:11.910498       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:16.193592    8536 command_runner.go:130] ! I0610 12:31:11.913998       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:16.193633    8536 command_runner.go:130] ! I0610 12:31:11.914276       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:16.193633    8536 command_runner.go:130] ! I0610 12:31:11.915779       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:16.193685    8536 command_runner.go:130] ! I0610 12:31:11.916916       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:16.193685    8536 command_runner.go:130] ! I0610 12:31:11.917975       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.918292       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.930523       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.947621       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.948394       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.948768       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.954911       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.957486       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.963420       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.973610       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.979167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.980674       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.984963       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.985188       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:11.994612       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.003389       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.007898       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.011185       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.013303       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.014815       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.016632       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.016812       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.016947       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.017245       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.017927       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.018270       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.019668       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.019818       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.023667       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.024171       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.025888       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.026414       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.026742       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.026899       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.031613       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.035671       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.038980       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:16.193724    8536 command_runner.go:130] ! I0610 12:31:12.040498       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:16.194256    8536 command_runner.go:130] ! I0610 12:31:12.044612       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:16.194256    8536 command_runner.go:130] ! I0610 12:31:12.044983       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:16.194256    8536 command_runner.go:130] ! I0610 12:31:12.048630       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:16.194256    8536 command_runner.go:130] ! I0610 12:31:12.048809       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.050934       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.051748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.77596ms"
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.058669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.911µs"
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.061957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.647762ms"
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.062771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="326.05µs"
	I0610 12:32:16.194298    8536 command_runner.go:130] ! I0610 12:31:12.074892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:16.194397    8536 command_runner.go:130] ! I0610 12:31:12.074973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:16.194397    8536 command_runner.go:130] ! I0610 12:31:12.075004       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:16.194437    8536 command_runner.go:130] ! I0610 12:31:12.075594       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:16.194437    8536 command_runner.go:130] ! I0610 12:31:12.130853       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:16.194486    8536 command_runner.go:130] ! I0610 12:31:12.140823       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:16.194486    8536 command_runner.go:130] ! I0610 12:31:12.147492       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:16.194486    8536 command_runner.go:130] ! I0610 12:31:12.174418       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:16.194526    8536 command_runner.go:130] ! I0610 12:31:12.201305       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:16.194526    8536 command_runner.go:130] ! I0610 12:31:12.218626       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:16.194569    8536 command_runner.go:130] ! I0610 12:31:12.243193       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:16.194569    8536 command_runner.go:130] ! I0610 12:31:12.658052       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:16.194609    8536 command_runner.go:130] ! I0610 12:31:12.658432       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:16.194609    8536 command_runner.go:130] ! I0610 12:31:12.674720       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:16.194678    8536 command_runner.go:130] ! I0610 12:31:42.085794       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:16.194678    8536 command_runner.go:130] ! I0610 12:32:06.626500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.481917ms"
	I0610 12:32:16.194711    8536 command_runner.go:130] ! I0610 12:32:06.626834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.891µs"
	I0610 12:32:16.194781    8536 command_runner.go:130] ! I0610 12:32:06.653330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="217.376µs"
	I0610 12:32:16.194781    8536 command_runner.go:130] ! I0610 12:32:06.704393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.856077ms"
	I0610 12:32:16.194832    8536 command_runner.go:130] ! I0610 12:32:06.705453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.995µs"
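The controller-manager lines above show the node-lifecycle-controller rebuilding state after restart: at 12:31:12 it had no heartbeat timestamps for multinode-813300, -m02 and -m03, entered master disruption mode, and exited it at 12:31:42 once heartbeats resumed. That timeline could be cross-checked with an illustrative command (assuming the kubeconfig context matches the profile name, as elsewhere in this suite):

	kubectl --context multinode-813300 get nodes -o wide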
	I0610 12:32:16.213494    8536 logs.go:123] Gathering logs for coredns [24f3f7e041f9] ...
	I0610 12:32:16.213494    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24f3f7e041f9"
	I0610 12:32:16.257020    8536 command_runner.go:130] > .:53
	I0610 12:32:16.257020    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:16.257020    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:16.257020    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:16.257020    8536 command_runner.go:130] > [INFO] 127.0.0.1:34387 - 41508 "HINFO IN 7171992165040069679.5605173313288368349. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051230172s
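The coredns output above is a clean startup: the Corefile hash, CoreDNS-1.11.1, and the usual startup self-test HINFO query for a random name, which correctly returns NXDOMAIN. A lookup of that kind can be replayed in-cluster with the same busybox image the suite already uses (illustrative only, assuming the multinode-813300 context):

	kubectl --context multinode-813300 run dns-test --rm --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- nslookup kubernetes.default.svc.cluster.local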
	I0610 12:32:16.257020    8536 logs.go:123] Gathering logs for kindnet [c39d54960e7d] ...
	I0610 12:32:16.257020    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39d54960e7d"
	I0610 12:32:16.295302    8536 command_runner.go:130] ! I0610 12:12:45.866152       1 main.go:227] handling current node
	I0610 12:32:16.296336    8536 command_runner.go:130] ! I0610 12:12:45.866170       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.296336    8536 command_runner.go:130] ! I0610 12:12:45.866178       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297028    8536 command_runner.go:130] ! I0610 12:12:55.883210       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297171    8536 command_runner.go:130] ! I0610 12:12:55.883426       1 main.go:227] handling current node
	I0610 12:32:16.297569    8536 command_runner.go:130] ! I0610 12:12:55.883562       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:12:55.883686       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:05.893577       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:05.893734       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:05.893787       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:05.893797       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:15.902454       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:15.902590       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:15.902606       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:15.902614       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:25.917172       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:25.917277       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:25.917297       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:25.917305       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:35.933505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:35.933609       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:35.933623       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:35.933630       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:45.943963       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:45.944071       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:45.944089       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:45.944114       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:55.953212       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:55.953354       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:55.953371       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:13:55.953380       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:05.959968       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:05.960014       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:05.960029       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:05.960036       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:15.970279       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:15.970375       1 main.go:227] handling current node
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:15.970391       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.297681    8536 command_runner.go:130] ! I0610 12:14:15.970399       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298490    8536 command_runner.go:130] ! I0610 12:14:25.977769       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298490    8536 command_runner.go:130] ! I0610 12:14:25.977865       1 main.go:227] handling current node
	I0610 12:32:16.298490    8536 command_runner.go:130] ! I0610 12:14:25.977880       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298490    8536 command_runner.go:130] ! I0610 12:14:25.977886       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:35.984527       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:35.984582       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:35.984596       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:35.984604       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:46.000499       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:46.000612       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:46.000635       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:46.000650       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:56.007468       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:56.007626       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:56.007642       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:14:56.007651       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:06.022181       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:06.022286       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:06.022302       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:06.022312       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:16.038901       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:16.038992       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:16.039008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:16.039016       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:26.062184       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:26.062279       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:26.062296       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:26.062304       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:36.071408       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:36.071540       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:36.071556       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:36.071564       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:46.078051       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:46.078158       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:46.078176       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:46.078184       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:56.086545       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:56.086647       1 main.go:227] handling current node
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:56.086663       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:15:56.086671       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.298610    8536 command_runner.go:130] ! I0610 12:16:06.094871       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299195    8536 command_runner.go:130] ! I0610 12:16:06.094920       1 main.go:227] handling current node
	I0610 12:32:16.299238    8536 command_runner.go:130] ! I0610 12:16:06.094935       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:06.094958       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:16.109713       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:16.110282       1 main.go:227] handling current node
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:16.110679       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:16.110879       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:26.124392       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299649    8536 command_runner.go:130] ! I0610 12:16:26.124492       1 main.go:227] handling current node
	I0610 12:32:16.299920    8536 command_runner.go:130] ! I0610 12:16:26.124507       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:26.124514       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:36.130696       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:36.130864       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:36.130880       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:36.130888       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:46.145505       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:46.145897       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:46.146067       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:46.146083       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:56.160466       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:56.160571       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:56.160586       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:16:56.160594       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:06.173930       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:06.173977       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:06.173992       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:06.173999       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:16.180797       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:16.180971       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:16.181005       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:16.181031       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:26.197081       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:26.197184       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:26.197201       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:26.197210       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:36.204586       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:36.204700       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:36.204716       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:36.204725       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:46.214904       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:46.215024       1 main.go:227] handling current node
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:46.215040       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:46.215048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:56.228072       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.299946    8536 command_runner.go:130] ! I0610 12:17:56.228173       1 main.go:227] handling current node
	I0610 12:32:16.300513    8536 command_runner.go:130] ! I0610 12:17:56.228189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300513    8536 command_runner.go:130] ! I0610 12:17:56.228197       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300513    8536 command_runner.go:130] ! I0610 12:18:06.237192       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300513    8536 command_runner.go:130] ! I0610 12:18:06.237303       1 main.go:227] handling current node
	I0610 12:32:16.300513    8536 command_runner.go:130] ! I0610 12:18:06.237329       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:06.237354       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:16.244574       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:16.244799       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:16.244837       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:16.244863       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:26.258608       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:26.258654       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:26.258669       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:26.258676       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:36.264620       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:36.264824       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:36.264841       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:36.264850       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:46.275317       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:46.275426       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:46.275460       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:46.275469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:56.290965       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:56.291027       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:56.291041       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:18:56.291048       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:06.298370       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:06.298512       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:06.298529       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:06.298537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:16.309110       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:16.309215       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:16.309232       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:16.309240       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:26.322583       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:26.322633       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:26.322647       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:26.322654       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:36.336250       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:36.336376       1 main.go:227] handling current node
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:36.336392       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:36.336400       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.300820    8536 command_runner.go:130] ! I0610 12:19:46.350996       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301379    8536 command_runner.go:130] ! I0610 12:19:46.351137       1 main.go:227] handling current node
	I0610 12:32:16.301421    8536 command_runner.go:130] ! I0610 12:19:46.351155       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:19:46.351164       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:19:56.356996       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:19:56.357039       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:19:56.357052       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:19:56.357059       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:06.372114       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:06.372883       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:06.373032       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:06.373062       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:16.381023       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:16.381690       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:16.381940       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:16.381975       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:26.389178       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:26.389224       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:26.389240       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:26.389247       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:36.395687       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:36.395828       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:36.395844       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:36.395851       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:46.410656       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:46.410865       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:46.410882       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:46.410891       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:56.425296       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:56.425540       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:56.425625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:20:56.425639       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:06.439346       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:06.439393       1 main.go:227] handling current node
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:06.439406       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:06.439413       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:16.450424       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.301488    8536 command_runner.go:130] ! I0610 12:21:16.450594       1 main.go:227] handling current node
	I0610 12:32:16.302071    8536 command_runner.go:130] ! I0610 12:21:16.450628       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:16.450821       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:26.458379       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:26.458487       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:26.458503       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:26.458511       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:36.474243       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:36.474337       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:36.474354       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:36.474362       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:46.486635       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:46.486679       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:46.486693       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:46.486700       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:56.502256       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:56.502361       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:56.502377       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:21:56.502386       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:06.508796       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:06.508911       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:06.508928       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:06.508957       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:16.523863       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:16.523952       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:16.523970       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:16.523979       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:26.531516       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:26.531621       1 main.go:227] handling current node
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:26.531637       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:26.531645       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.302193    8536 command_runner.go:130] ! I0610 12:22:36.546403       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.302889    8536 command_runner.go:130] ! I0610 12:22:36.546510       1 main.go:227] handling current node
	I0610 12:32:16.302889    8536 command_runner.go:130] ! I0610 12:22:36.546525       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.302889    8536 command_runner.go:130] ! I0610 12:22:36.546533       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303110    8536 command_runner.go:130] ! I0610 12:22:46.603429       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:46.603565       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:46.603581       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:46.603590       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:56.619134       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:56.619253       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:56.619287       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:22:56.619296       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:06.634307       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:06.634399       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:06.634415       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:06.634424       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:16.649341       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:16.649508       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:16.649527       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:16.649539       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:26.662421       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:26.662451       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:26.662462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:26.662468       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:36.669686       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:36.669734       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:36.669822       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:36.669831       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:46.678078       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:46.678194       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:46.678209       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:46.678217       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:56.685841       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:56.685884       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:56.685898       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:23:56.685905       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:24:06.692341       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:24:06.692609       1 main.go:227] handling current node
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:24:06.692699       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:24:06.692856       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.303223    8536 command_runner.go:130] ! I0610 12:24:16.700494       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.303826    8536 command_runner.go:130] ! I0610 12:24:16.700609       1 main.go:227] handling current node
	I0610 12:32:16.303871    8536 command_runner.go:130] ! I0610 12:24:16.700625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:16.700633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:26.716495       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:26.716609       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:26.716625       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:26.716633       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:36.723606       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:36.723716       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:36.723733       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:36.724254       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:46.739916       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:46.740008       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:46.740402       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:46.740432       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:56.759676       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:56.760848       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:56.760902       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:24:56.760914       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:06.771450       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:06.771514       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:06.771530       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:06.771537       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:16.778338       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:16.778445       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:16.778461       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:16.778469       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:26.791778       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:26.791933       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:26.791950       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:26.791974       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:36.800633       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:36.800842       1 main.go:227] handling current node
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:36.800860       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304010    8536 command_runner.go:130] ! I0610 12:25:36.800869       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304679    8536 command_runner.go:130] ! I0610 12:25:46.815290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304679    8536 command_runner.go:130] ! I0610 12:25:46.815339       1 main.go:227] handling current node
	I0610 12:32:16.304781    8536 command_runner.go:130] ! I0610 12:25:46.815355       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304863    8536 command_runner.go:130] ! I0610 12:25:46.815363       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.830374       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.830439       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.830471       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.830478       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.831222       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.831411       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:25:56.831494       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.840295       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.840446       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.840464       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.840913       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.845129       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:06.845329       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.860365       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.860476       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.860493       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.860502       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.861223       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:16.861379       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.873719       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.873964       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.874016       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.874181       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.874413       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:26.874451       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881254       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881366       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881382       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881407       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881814       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:36.881908       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900700       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900797       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900815       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900823       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:46.900985       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907290       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907395       1 main.go:227] handling current node
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907412       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907420       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907548       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.304946    8536 command_runner.go:130] ! I0610 12:26:56.907656       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922305       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922349       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922361       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922367       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922490       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:06.922515       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.929579       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.929687       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.929704       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.929712       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.930550       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:16.930641       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.944603       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.944719       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.944772       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.945138       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.945535       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:26.945625       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.955188       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.955329       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.955462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.955581       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.955956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:36.956158       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.965590       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.965717       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.965826       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.965836       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.966598       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:46.966708       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:56.999276       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:56.999553       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:56.999711       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:56.999728       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:57.000088       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:27:57.000177       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015069       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015281       1 main.go:227] handling current node
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015300       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015308       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015707       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.305568    8536 command_runner.go:130] ! I0610 12:28:07.015928       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
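The kindnet entries above are passes of its ~10s node-sync loop: list all nodes, log "handling current node" for the node the daemon runs on, and ensure each remote node's PodCIDR (10.244.1.0/24 for -m02, 10.244.2.0/24 for -m03) is routed via that node's IP. A minimal Go sketch of that reconciliation, assuming the github.com/vishvananda/netlink package (whose Route struct matches the "Adding route {Ifindex: 0 Dst: ... Gw: ...}" lines later in this log); the node type and syncRoutes helper are illustrative, not kindnet's actual code:

// Sketch of a kindnet-style route reconciler. Assumes
// github.com/vishvananda/netlink; identifiers are illustrative.
package routesync

import (
	"fmt"
	"net"

	"github.com/vishvananda/netlink"
)

// node carries the two fields the log reports per node:
// the node's internal IP and its PodCIDR.
type node struct {
	ip      net.IP
	podCIDR string
}

// syncRoutes programs one route per remote node: PodCIDR via node IP.
// The current node is skipped ("handling current node" in the log).
func syncRoutes(current net.IP, nodes []node) error {
	for _, n := range nodes {
		if n.ip.Equal(current) {
			continue
		}
		_, dst, err := net.ParseCIDR(n.podCIDR)
		if err != nil {
			return fmt.Errorf("bad PodCIDR %q: %w", n.podCIDR, err)
		}
		// RouteReplace is idempotent, so repeating this every sync
		// period is harmless once the route already exists.
		if err := netlink.RouteReplace(&netlink.Route{Dst: dst, Gw: n.ip}); err != nil {
			return fmt.Errorf("route %v via %v: %w", dst, n.ip, err)
		}
	}
	return nil
}

That idempotence is why the steady-state passes above only log node handling, not route changes.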
	I0610 12:32:16.324960    8536 logs.go:123] Gathering logs for kube-proxy [afad8b05897e] ...
	I0610 12:32:16.324960    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 afad8b05897e"
	I0610 12:32:16.355539    8536 command_runner.go:130] ! I0610 12:08:17.787330       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:16.355728    8536 command_runner.go:130] ! I0610 12:08:17.815813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.159.171"]
	I0610 12:32:16.355728    8536 command_runner.go:130] ! I0610 12:08:17.929231       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:16.355767    8536 command_runner.go:130] ! I0610 12:08:17.929304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:16.355767    8536 command_runner.go:130] ! I0610 12:08:17.929325       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:16.355813    8536 command_runner.go:130] ! I0610 12:08:17.933115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:16.355813    8536 command_runner.go:130] ! I0610 12:08:17.933534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:16.355856    8536 command_runner.go:130] ! I0610 12:08:17.933681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.355856    8536 command_runner.go:130] ! I0610 12:08:17.935227       1 config.go:192] "Starting service config controller"
	I0610 12:32:16.355892    8536 command_runner.go:130] ! I0610 12:08:17.935260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:16.355892    8536 command_runner.go:130] ! I0610 12:08:17.935291       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:16.355957    8536 command_runner.go:130] ! I0610 12:08:17.935297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:16.356007    8536 command_runner.go:130] ! I0610 12:08:17.937731       1 config.go:319] "Starting node config controller"
	I0610 12:32:16.356007    8536 command_runner.go:130] ! I0610 12:08:17.938095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:16.356039    8536 command_runner.go:130] ! I0610 12:08:18.035433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:32:16.356039    8536 command_runner.go:130] ! I0610 12:08:18.035502       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:16.356039    8536 command_runner.go:130] ! I0610 12:08:18.038590       1 shared_informer.go:320] Caches are synced for node config
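The kube-proxy startup above follows the standard client-go informer handshake: start each config controller, log "Waiting for caches to sync", and begin proxying only once the initial LIST/WATCH has populated the local caches ("Caches are synced"). A compact sketch of that pattern with a shared informer factory; the Services example and the waitForServiceConfig name are illustrative (kube-proxy's real config controllers live in k8s.io/kubernetes/pkg/proxy/config):

// Sketch of the "Waiting for caches to sync" / "Caches are synced"
// handshake using client-go shared informers.
package proxysketch

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func waitForServiceConfig(cs kubernetes.Interface, stop <-chan struct{}) error {
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	factory.Start(stop) // kicks off the reflectors and returns immediately

	// Blocks until the first LIST+WATCH completes; this is the moment
	// the log flips from "Waiting for caches to sync" to "Caches are synced".
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		return fmt.Errorf("timed out waiting for service cache sync")
	}
	return nil
}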
	I0610 12:32:16.359341    8536 logs.go:123] Gathering logs for kube-apiserver [d7941126134f] ...
	I0610 12:32:16.359514    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7941126134f"
	I0610 12:32:16.384533    8536 command_runner.go:130] ! I0610 12:30:56.783636       1 options.go:221] external host was not specified, using 172.17.150.144
	I0610 12:32:16.384533    8536 command_runner.go:130] ! I0610 12:30:56.802716       1 server.go:148] Version: v1.30.1
	I0610 12:32:16.384533    8536 command_runner.go:130] ! I0610 12:30:56.802771       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.385574    8536 command_runner.go:130] ! I0610 12:30:57.206580       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 12:32:16.385574    8536 command_runner.go:130] ! I0610 12:30:57.224598       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:57.225809       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:57.226002       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:57.226365       1 instance.go:299] Using reconciler: lease
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:57.637999       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:57.638403       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.007103       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.008169       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.357732       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.553660       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.567826       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.567936       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.567947       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.569137       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.569236       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.570636       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.572063       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.572082       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.572088       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.575154       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.575194       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.576862       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.576966       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.576976       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.577920       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.578059       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.578305       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.579295       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.581807       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.581943       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! W0610 12:30:58.582127       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.386416    8536 command_runner.go:130] ! I0610 12:30:58.583254       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.583359       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.583370       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.387018    8536 command_runner.go:130] ! I0610 12:30:58.594003       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.594046       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0610 12:32:16.387018    8536 command_runner.go:130] ! I0610 12:30:58.597008       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.597028       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.597047       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.387018    8536 command_runner.go:130] ! I0610 12:30:58.597658       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0610 12:32:16.387018    8536 command_runner.go:130] ! W0610 12:30:58.597679       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387194    8536 command_runner.go:130] ! W0610 12:30:58.597686       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.387194    8536 command_runner.go:130] ! I0610 12:30:58.602889       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0610 12:32:16.387194    8536 command_runner.go:130] ! W0610 12:30:58.602907       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387194    8536 command_runner.go:130] ! W0610 12:30:58.602913       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.387256    8536 command_runner.go:130] ! I0610 12:30:58.608646       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0610 12:32:16.387256    8536 command_runner.go:130] ! I0610 12:30:58.610262       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0610 12:32:16.387297    8536 command_runner.go:130] ! W0610 12:30:58.610275       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0610 12:32:16.387297    8536 command_runner.go:130] ! W0610 12:30:58.610281       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387297    8536 command_runner.go:130] ! I0610 12:30:58.619816       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0610 12:32:16.387297    8536 command_runner.go:130] ! W0610 12:30:58.619856       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0610 12:32:16.387297    8536 command_runner.go:130] ! W0610 12:30:58.619866       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0610 12:32:16.387392    8536 command_runner.go:130] ! I0610 12:30:58.627044       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0610 12:32:16.387392    8536 command_runner.go:130] ! W0610 12:30:58.627092       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387392    8536 command_runner.go:130] ! W0610 12:30:58.627296       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0610 12:32:16.387438    8536 command_runner.go:130] ! I0610 12:30:58.629017       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0610 12:32:16.387438    8536 command_runner.go:130] ! W0610 12:30:58.629067       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387438    8536 command_runner.go:130] ! I0610 12:30:58.659122       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0610 12:32:16.387438    8536 command_runner.go:130] ! W0610 12:30:58.659244       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0610 12:32:16.387438    8536 command_runner.go:130] ! I0610 12:30:59.341469       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:16.387522    8536 command_runner.go:130] ! I0610 12:30:59.341814       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:16.387522    8536 command_runner.go:130] ! I0610 12:30:59.341806       1 secure_serving.go:213] Serving securely on [::]:8443
	I0610 12:32:16.387522    8536 command_runner.go:130] ! I0610 12:30:59.342486       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0610 12:32:16.387604    8536 command_runner.go:130] ! I0610 12:30:59.342867       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0610 12:32:16.387604    8536 command_runner.go:130] ! I0610 12:30:59.342901       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0610 12:32:16.387604    8536 command_runner.go:130] ! I0610 12:30:59.342987       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0610 12:32:16.387712    8536 command_runner.go:130] ! I0610 12:30:59.341865       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:16.387712    8536 command_runner.go:130] ! I0610 12:30:59.344865       1 controller.go:116] Starting legacy_token_tracking_controller
	I0610 12:32:16.387712    8536 command_runner.go:130] ! I0610 12:30:59.344899       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0610 12:32:16.387712    8536 command_runner.go:130] ! I0610 12:30:59.346737       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0610 12:32:16.387712    8536 command_runner.go:130] ! I0610 12:30:59.346910       1 available_controller.go:423] Starting AvailableConditionController
	I0610 12:32:16.387816    8536 command_runner.go:130] ! I0610 12:30:59.346960       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0610 12:32:16.387816    8536 command_runner.go:130] ! I0610 12:30:59.347078       1 aggregator.go:163] waiting for initial CRD sync...
	I0610 12:32:16.387816    8536 command_runner.go:130] ! I0610 12:30:59.347170       1 controller.go:78] Starting OpenAPI AggregationController
	I0610 12:32:16.387816    8536 command_runner.go:130] ! I0610 12:30:59.347256       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0610 12:32:16.387816    8536 command_runner.go:130] ! I0610 12:30:59.347656       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0610 12:32:16.387895    8536 command_runner.go:130] ! I0610 12:30:59.347947       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0610 12:32:16.387895    8536 command_runner.go:130] ! I0610 12:30:59.348233       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.348295       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.341877       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.377996       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.378109       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.378362       1 controller.go:139] Starting OpenAPI controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.378742       1 controller.go:87] Starting OpenAPI V3 controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.378883       1 naming_controller.go:291] Starting NamingConditionController
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379043       1 establishing_controller.go:76] Starting EstablishingController
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379247       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379438       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379518       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379777       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.379999       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.524664       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.525326       1 policy_source.go:224] refreshing policies
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.543486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.547084       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.548579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.549972       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.550011       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.551151       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.554229       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.560228       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.578343       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.578414       1 aggregator.go:165] initial CRD sync complete...
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.578429       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.578437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.578466       1 cache.go:39] Caches are synced for autoregister controller
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:30:59.606740       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:31:00.360768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 12:32:16.387958    8536 command_runner.go:130] ! W0610 12:31:00.893787       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.150.144]
	I0610 12:32:16.387958    8536 command_runner.go:130] ! I0610 12:31:00.913283       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:00.933946       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:02.471259       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:02.690867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:02.714405       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:02.840117       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 12:32:16.388504    8536 command_runner.go:130] ! I0610 12:31:02.856715       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
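The apiserver bring-up above has three phases visible in the log: build storage and register every served GroupVersion with the ResourceManager (skipping alpha/beta groups that expose no resources), start the serving stack and its controllers, then add quota evaluators lazily as the first objects of each kind appear. To confirm which GroupVersions a cluster actually ended up serving, the discovery API reports exactly the registered set; a sketch (the kubeconfig parameter is an assumption):

// Sketch: list the GroupVersions the apiserver serves, i.e. the set
// built by the "Adding GroupVersion ..." lines above. Beta groups
// skipped at startup simply do not appear in the output.
package discoverysketch

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func printServedGroupVersions(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		return err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion)
		}
	}
	return nil
}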
	I0610 12:32:16.396774    8536 logs.go:123] Gathering logs for kube-scheduler [d90e72ef4670] ...
	I0610 12:32:16.396774    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90e72ef4670"
	I0610 12:32:16.422903    8536 command_runner.go:130] ! I0610 12:30:56.811878       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:16.422903    8536 command_runner.go:130] ! W0610 12:30:59.481898       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0610 12:32:16.423247    8536 command_runner.go:130] ! W0610 12:30:59.482123       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 12:32:16.423345    8536 command_runner.go:130] ! W0610 12:30:59.482217       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0610 12:32:16.423417    8536 command_runner.go:130] ! W0610 12:30:59.482255       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:32:16.423491    8536 command_runner.go:130] ! I0610 12:30:59.514164       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:32:16.423517    8536 command_runner.go:130] ! I0610 12:30:59.514266       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.423582    8536 command_runner.go:130] ! I0610 12:30:59.518405       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:32:16.423600    8536 command_runner.go:130] ! I0610 12:30:59.518496       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:16.423600    8536 command_runner.go:130] ! I0610 12:30:59.518958       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:32:16.423668    8536 command_runner.go:130] ! I0610 12:30:59.519337       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:16.423720    8536 command_runner.go:130] ! I0610 12:30:59.619122       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:32:16.425942    8536 logs.go:123] Gathering logs for kube-proxy [1de5fa0ef838] ...
	I0610 12:32:16.425942    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1de5fa0ef838"
	I0610 12:32:16.452339    8536 command_runner.go:130] ! I0610 12:31:02.254962       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:32:16.453007    8536 command_runner.go:130] ! I0610 12:31:02.294630       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.150.144"]
	I0610 12:32:16.453007    8536 command_runner.go:130] ! I0610 12:31:02.403290       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:32:16.453041    8536 command_runner.go:130] ! I0610 12:31:02.403338       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:32:16.453041    8536 command_runner.go:130] ! I0610 12:31:02.403357       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.416009       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.416300       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.416345       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.424657       1 config.go:192] "Starting service config controller"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.425325       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.425369       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.425382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.432037       1 config.go:319] "Starting node config controller"
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.432075       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.535663       1 shared_informer.go:320] Caches are synced for node config
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.535744       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:32:16.453074    8536 command_runner.go:130] ! I0610 12:31:02.535786       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:32:16.456220    8536 logs.go:123] Gathering logs for kindnet [c3c4316beca6] ...
	I0610 12:32:16.456220    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c4316beca6"
	I0610 12:32:16.485136    8536 command_runner.go:130] ! I0610 12:31:02.264969       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 12:32:16.485448    8536 command_runner.go:130] ! I0610 12:31:02.265572       1 main.go:107] hostIP = 172.17.150.144
	I0610 12:32:16.485506    8536 command_runner.go:130] ! podIP = 172.17.150.144
	I0610 12:32:16.485506    8536 command_runner.go:130] ! I0610 12:31:02.265708       1 main.go:116] setting mtu 1500 for CNI 
	I0610 12:32:16.485506    8536 command_runner.go:130] ! I0610 12:31:02.265761       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 12:32:16.485506    8536 command_runner.go:130] ! I0610 12:31:02.265778       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 12:32:16.485584    8536 command_runner.go:130] ! I0610 12:31:32.684223       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0610 12:32:16.485621    8536 command_runner.go:130] ! I0610 12:31:32.703397       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:16.485658    8536 command_runner.go:130] ! I0610 12:31:32.703595       1 main.go:227] handling current node
	I0610 12:32:16.485658    8536 command_runner.go:130] ! I0610 12:31:32.742189       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.485658    8536 command_runner.go:130] ! I0610 12:31:32.742230       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:32.742783       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.151.128 Flags: [] Table: 0} 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:32.743097       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:32.743120       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:32.743193       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.144.46 Flags: [] Table: 0} 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750326       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750472       1 main.go:227] handling current node
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750487       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750494       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750648       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:42.750678       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:52.767023       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:16.485724    8536 command_runner.go:130] ! I0610 12:31:52.767174       1 main.go:227] handling current node
	I0610 12:32:16.485975    8536 command_runner.go:130] ! I0610 12:31:52.767191       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.486098    8536 command_runner.go:130] ! I0610 12:31:52.767199       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.486098    8536 command_runner.go:130] ! I0610 12:31:52.767842       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.486098    8536 command_runner.go:130] ! I0610 12:31:52.767929       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.486146    8536 command_runner.go:130] ! I0610 12:32:02.782886       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:16.486146    8536 command_runner.go:130] ! I0610 12:32:02.782992       1 main.go:227] handling current node
	I0610 12:32:16.486146    8536 command_runner.go:130] ! I0610 12:32:02.783008       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.486332    8536 command_runner.go:130] ! I0610 12:32:02.783073       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.486332    8536 command_runner.go:130] ! I0610 12:32:02.783951       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.486332    8536 command_runner.go:130] ! I0610 12:32:02.784044       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.486332    8536 command_runner.go:130] ! I0610 12:32:12.799859       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:32:16.486332    8536 command_runner.go:130] ! I0610 12:32:12.799956       1 main.go:227] handling current node
	I0610 12:32:16.486445    8536 command_runner.go:130] ! I0610 12:32:12.799981       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:32:16.486445    8536 command_runner.go:130] ! I0610 12:32:12.799989       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:32:16.486445    8536 command_runner.go:130] ! I0610 12:32:12.800455       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:32:16.486494    8536 command_runner.go:130] ! I0610 12:32:12.800616       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:32:16.491125    8536 logs.go:123] Gathering logs for Docker ...
	I0610 12:32:16.491205    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube cri-dockerd[222]: time="2024-06-10T12:29:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube cri-dockerd[409]: time="2024-06-10T12:29:19Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:16.524090    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:16.525155    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:16.525155    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:16.525235    8536 command_runner.go:130] > Jun 10 12:29:21 minikube cri-dockerd[429]: time="2024-06-10T12:29:21Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0610 12:32:16.525235    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:16.525235    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:16.525235    8536 command_runner.go:130] > Jun 10 12:29:21 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.525235    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0610 12:32:16.525334    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.525379    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0610 12:32:16.525379    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0610 12:32:16.525435    8536 command_runner.go:130] > Jun 10 12:29:23 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
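The cri-docker.service loop above is a startup-ordering failure, not a crash in cri-dockerd itself: each attempt exits fatally because nothing is answering on /var/run/docker.sock yet, and after the third rapid restart systemd gives up ("Start request repeated too quickly") until dockerd comes up at 12:30:13 below. A trivial Go probe for the same precondition, using the socket path from the log:

// Sketch: check whether dockerd is accepting connections on its
// unix socket - the exact precondition cri-dockerd fails on above.
package socketsketch

import (
	"fmt"
	"net"
	"time"
)

func dockerdReachable() error {
	conn, err := net.DialTimeout("unix", "/var/run/docker.sock", 2*time.Second)
	if err != nil {
		// Corresponds to cri-dockerd's fatal "Cannot connect to the
		// Docker daemon at unix:///var/run/docker.sock".
		return fmt.Errorf("dockerd not reachable: %w", err)
	}
	return conn.Close()
}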
	I0610 12:32:16.525497    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:16.525556    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.665734294Z" level=info msg="Starting up"
	I0610 12:32:16.525556    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.666799026Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:16.525623    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[656]: time="2024-06-10T12:30:13.668025832Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I0610 12:32:16.525807    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.707077561Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:16.525871    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745342414Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:16.526003    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745425201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:16.526072    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745528085Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:16.526136    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.745580077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526136    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746319960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.526202    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746463837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526263    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746722696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.526263    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746775088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526263    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746796184Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:16.526263    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.746813182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526263    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.747203320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526809    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.748049086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.526856    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752393000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.526856    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752519780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.527009    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752692453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.527062    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.752790737Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:16.527062    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753305956Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:16.527103    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753420338Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:16.527212    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.753439135Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:16.527253    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759080243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:16.527292    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759316106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:16.527383    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759347801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:16.527425    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759374497Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:16.527490    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759392594Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:16.527490    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759476281Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:16.527546    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.759928509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:16.527546    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760128877Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:16.527610    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760824467Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:16.527687    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760850663Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:16.527752    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760867361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527752    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760883758Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527810    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760898556Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527810    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760914553Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527864    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760935350Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527864    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760951047Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527922    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760966645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.527984    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.760986442Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.528044    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761064230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528044    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761105323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528044    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761128319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528126    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761143417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528126    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761158215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528187    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761173012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528187    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761187310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528252    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761210007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528252    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761455768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528316    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761477764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528373    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761493962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528373    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761507660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528491    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761522057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528491    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761538755Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:16.528557    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761561351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528619    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761583448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.528619    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761598445Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:16.528675    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761652437Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:16.528675    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761676833Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:16.528729    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761691230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:16.528867    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761709928Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:16.528927    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761721526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.529022    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761735324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:16.529075    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.761752021Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:16.529114    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762164056Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:16.529148    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762290536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:16.529148    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762532698Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:16.529236    8536 command_runner.go:130] > Jun 10 12:30:13 multinode-813300 dockerd[662]: time="2024-06-10T12:30:13.762557794Z" level=info msg="containerd successfully booted in 0.059804s"
	I0610 12:32:16.529236    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.723660372Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:16.529267    8536 command_runner.go:130] > Jun 10 12:30:14 multinode-813300 dockerd[656]: time="2024-06-10T12:30:14.979070633Z" level=info msg="Loading containers: start."
	I0610 12:32:16.529344    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.430556665Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:16.529379    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.525359393Z" level=info msg="Loading containers: done."
	I0610 12:32:16.529480    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.555368825Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:16.529480    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.556499190Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:16.529480    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614621979Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:16.529558    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 dockerd[656]: time="2024-06-10T12:30:15.614710469Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:16.529558    8536 command_runner.go:130] > Jun 10 12:30:15 multinode-813300 systemd[1]: Started Docker Application Container Engine.
	I0610 12:32:16.529617    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.105858304Z" level=info msg="Processing signal 'terminated'"
	I0610 12:32:16.529617    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.107858244Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0610 12:32:16.529683    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 systemd[1]: Stopping Docker Application Container Engine...
	I0610 12:32:16.529740    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109274172Z" level=info msg="Daemon shutdown complete"
	I0610 12:32:16.529740    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109439076Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0610 12:32:16.529865    8536 command_runner.go:130] > Jun 10 12:30:44 multinode-813300 dockerd[656]: time="2024-06-10T12:30:44.109591179Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0610 12:32:16.529899    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: docker.service: Deactivated successfully.
	I0610 12:32:16.529930    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Stopped Docker Application Container Engine.
	I0610 12:32:16.529966    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 systemd[1]: Starting Docker Application Container Engine...
	I0610 12:32:16.530040    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.200932485Z" level=info msg="Starting up"
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.202989526Z" level=info msg="containerd not running, starting managed containerd"
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:45.204789062Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.250167169Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291799101Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291856902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291930003Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291948904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291983304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.291997405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292182308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292287811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292310511Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292322911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292350212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.292701119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530080    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.295953884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.530615    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296063086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 12:32:16.530660    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296411793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 12:32:16.530729    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296455694Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 12:32:16.530848    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296587396Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 12:32:16.530894    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296721299Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 12:32:16.530997    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296741600Z" level=info msg="metadata content store policy set" policy=shared
	I0610 12:32:16.531171    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.296941504Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 12:32:16.531254    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297027105Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 12:32:16.531297    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297046206Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 12:32:16.531368    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297078906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 12:32:16.531443    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297254610Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 12:32:16.531443    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297334111Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 12:32:16.531525    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.297955024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 12:32:16.531586    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298031825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 12:32:16.531586    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298071126Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 12:32:16.531586    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298090126Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 12:32:16.531664    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298105527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531724    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298120527Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531724    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298155728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531724    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298172828Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531724    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298189828Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531822    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298204229Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531822    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298218329Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531822    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298230929Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 12:32:16.531940    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298260030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532031    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298281530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298296531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298318131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298333531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298494735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298514735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298529635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298592837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298610037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298624437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298639137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298652438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298669738Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298693539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298708139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298720839Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298773440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298792441Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298805041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298820841Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 12:32:16.532053    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298832741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298850742Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.298862942Z" level=info msg="NRI interface is disabled by configuration."
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299109447Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299202249Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299272150Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 12:32:16.532610    8536 command_runner.go:130] > Jun 10 12:30:45 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:45.299312051Z" level=info msg="containerd successfully booted in 0.052836s"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.253253712Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.287070988Z" level=info msg="Loading containers: start."
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.612574192Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.704084520Z" level=info msg="Loading containers: done."
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733112200Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.733256003Z" level=info msg="Daemon has completed initialization"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.788468006Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 systemd[1]: Started Docker Application Container Engine.
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:46 multinode-813300 dockerd[1052]: time="2024-06-10T12:30:46.790252742Z" level=info msg="API listen on [::]:2376"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start docker client with request timeout 0s"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Loaded network plugin cni"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:47Z" level=info msg="Start cri-dockerd grpc backend"
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:47 multinode-813300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-kbhvv_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c\""
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-z28tq_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d\""
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013449453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013587556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013608856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.013775860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.532752    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.087769538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089579074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.089879880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.090133785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183156944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183215145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183227346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533335    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.183318447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533506    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f56cc8af37db0f3fea8de363d927c6924c7ad7e81f4908f6f5c87d6c0db17a61/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.533561    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244245765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.533619    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244411968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.533619    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244427968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533671    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.244593672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.533873    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8902dac03acbce14b7e106bff482e591dd574972082943e9adda30969716a707/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.534025    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b13c0058ce265f3c4b18ec59cbb42b72803807a8d96330756114b2526fffa2de/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.534025    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.534131    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611175897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534168    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611296299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534168    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.611337700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534227    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.612109315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730665784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730725385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730738886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.730907689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848373736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.848822145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851216993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.851612501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900274973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900404876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900419576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 dockerd[1058]: time="2024-06-10T12:30:55.900508378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:30:59Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830014876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.830867993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831086098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.831510106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854754571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.854918174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.857723530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.858668949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534299    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.877394923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.534863    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878360042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.534863    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.878507645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534863    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:00.879086357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.534863    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d997d7c306c2a08fab9e0e53bd14a9da495d8b0abdad38c9935489b788eccd/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.535008    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2dd9b423841c9fee92dc2a884fe8f45fb9dd5b8713214ce8804ac8ced10629d1/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.535008    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337790622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535080    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337963526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535142    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.337992226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535142    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.338102629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535200    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.394005846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535200    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396505296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535265    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396667999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535265    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.396999105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535328    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:31:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c19b39e15f6ae82627ffedaf799ef63dd09554d65260dbfc8856b08a4ce7354/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.535389    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.711733694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535389    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712144402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535443    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712256705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535496    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:01.712964519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535496    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.980963328Z" level=info msg="shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:16.535552    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981035932Z" level=warning msg="cleaning up after shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	I0610 12:32:16.535605    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981047633Z" level=info msg="cleaning up dead shim" namespace=moby
	I0610 12:32:16.535605    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 dockerd[1052]: time="2024-06-10T12:31:31.981399154Z" level=info msg="ignoring event" container=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0610 12:32:16.535669    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.486941957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535721    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487165464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535778    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487187665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.488142597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345354892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345592698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345620799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345913706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.511059667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512286197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512437501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512775109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241c4811748facbb85003522d513039c3dfc5b38006b7f1cba90a5e411055e97/resolv.conf as [nameserver 172.17.144.1]"
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4d124cebb3b3affe7ace090f1a152544207db26621b5b4098cad87e3db47a4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955148547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955266050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955283650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955812861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.535811    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444723816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0610 12:32:16.536371    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444892597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0610 12:32:16.536371    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444914895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.536542    8536 command_runner.go:130] > Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.445846695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0610 12:32:16.570404    8536 logs.go:123] Gathering logs for describe nodes ...
	I0610 12:32:16.570404    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 12:32:16.791070    8536 command_runner.go:130] > Name:               multinode-813300
	I0610 12:32:16.791070    8536 command_runner.go:130] > Roles:              control-plane
	I0610 12:32:16.791070    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:16.791672    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:16.791672    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:16.791672    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300
	I0610 12:32:16.791672    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:16.791672    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:16.791732    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:16.791732    8536 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0610 12:32:16.791732    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700
	I0610 12:32:16.791792    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:16.791792    8536 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0610 12:32:16.791792    8536 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0610 12:32:16.791792    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:16.791792    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:16.791852    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:16.791852    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:07:57 +0000
	I0610 12:32:16.791852    8536 command_runner.go:130] > Taints:             <none>
	I0610 12:32:16.791852    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:16.791928    8536 command_runner.go:130] > Lease:
	I0610 12:32:16.791928    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300
	I0610 12:32:16.791928    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:16.791928    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:32:10 +0000
	I0610 12:32:16.791982    8536 command_runner.go:130] > Conditions:
	I0610 12:32:16.791982    8536 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0610 12:32:16.791982    8536 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0610 12:32:16.791982    8536 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0610 12:32:16.792049    8536 command_runner.go:130] >   DiskPressure     False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0610 12:32:16.792049    8536 command_runner.go:130] >   PIDPressure      False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0610 12:32:16.792375    8536 command_runner.go:130] >   Ready            True    Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:31:40 +0000   KubeletReady                 kubelet is posting ready status
	I0610 12:32:16.792444    8536 command_runner.go:130] > Addresses:
	I0610 12:32:16.792444    8536 command_runner.go:130] >   InternalIP:  172.17.150.144
	I0610 12:32:16.792444    8536 command_runner.go:130] >   Hostname:    multinode-813300
	I0610 12:32:16.792444    8536 command_runner.go:130] > Capacity:
	I0610 12:32:16.792444    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.792502    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.792551    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.792551    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.792551    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.792551    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:16.792551    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.792633    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.792633    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.792633    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.792633    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.792633    8536 command_runner.go:130] > System Info:
	I0610 12:32:16.792633    8536 command_runner.go:130] >   Machine ID:                 8363a852b0fa420a8dccb009e6f4f9c7
	I0610 12:32:16.792633    8536 command_runner.go:130] >   System UUID:                5734c1ff-f59b-f647-9c36-fb6d9a8cd541
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Boot ID:                    a60b688f-6b78-4fa5-b21e-96a64e5c1047
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:16.792752    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:16.792752    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:16.792831    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:16.792831    8536 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0610 12:32:16.792831    8536 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0610 12:32:16.792831    8536 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0610 12:32:16.792831    8536 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:16.792899    8536 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0610 12:32:16.792899    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-z28tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:16.792899    8536 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-kbhvv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0610 12:32:16.792958    8536 command_runner.go:130] >   kube-system                 etcd-multinode-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0610 12:32:16.792958    8536 command_runner.go:130] >   kube-system                 kindnet-29gbv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0610 12:32:16.793049    8536 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0610 12:32:16.793087    8536 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:16.793087    8536 command_runner.go:130] >   kube-system                 kube-proxy-nrpvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:16.793087    8536 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0610 12:32:16.793087    8536 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0610 12:32:16.793087    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:16.793087    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:16.793203    8536 command_runner.go:130] >   Resource           Requests     Limits
	I0610 12:32:16.793203    8536 command_runner.go:130] >   --------           --------     ------
	I0610 12:32:16.793248    8536 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0610 12:32:16.793248    8536 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0610 12:32:16.793248    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0610 12:32:16.793248    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0610 12:32:16.793248    8536 command_runner.go:130] > Events:
	I0610 12:32:16.793248    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:16.793312    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:16.793312    8536 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0610 12:32:16.793312    8536 command_runner.go:130] >   Normal  Starting                 74s                kube-proxy       
	I0610 12:32:16.793352    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:16.793352    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:16.793400    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:16.793400    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:16.793400    8536 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0610 12:32:16.793400    8536 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:16.793459    8536 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-813300 status is now: NodeReady
	I0610 12:32:16.793459    8536 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0610 12:32:16.793459    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	I0610 12:32:16.793459    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	I0610 12:32:16.793550    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	I0610 12:32:16.793550    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:16.793550    8536 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	I0610 12:32:16.793608    8536 command_runner.go:130] > Name:               multinode-813300-m02
	I0610 12:32:16.793608    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:16.793608    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:16.793608    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:16.793608    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:16.793608    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m02
	I0610 12:32:16.793608    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:16.793683    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:16.793683    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:16.793683    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:16.793683    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700
	I0610 12:32:16.793683    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:16.793683    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:16.793746    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:16.793746    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:16.793746    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:11:28 +0000
	I0610 12:32:16.793746    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:16.793746    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:16.793746    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:16.793809    8536 command_runner.go:130] > Lease:
	I0610 12:32:16.793809    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m02
	I0610 12:32:16.793809    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:16.793809    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:30 +0000
	I0610 12:32:16.793809    8536 command_runner.go:130] > Conditions:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:16.794015    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:16.794015    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.794015    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.794015    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.794015    8536 command_runner.go:130] > Addresses:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   InternalIP:  172.17.151.128
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Hostname:    multinode-813300-m02
	I0610 12:32:16.794015    8536 command_runner.go:130] > Capacity:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.794015    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.794015    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.794015    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.794015    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.794015    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.794015    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.794015    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.794015    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.794015    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.794015    8536 command_runner.go:130] > System Info:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Machine ID:                 0d46b791e8a04ff7a071c88405a5a4eb
	I0610 12:32:16.794015    8536 command_runner.go:130] >   System UUID:                e053fc34-e8e5-6649-afc7-f62c0d458753
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Boot ID:                    a3528c50-da8b-4321-8198-65ea5eca732a
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:16.794015    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:16.794015    8536 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0610 12:32:16.794015    8536 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0610 12:32:16.794015    8536 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:16.794015    8536 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0610 12:32:16.794015    8536 command_runner.go:130] >   default                     busybox-fc5497c4f-czxmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0610 12:32:16.794015    8536 command_runner.go:130] >   kube-system                 kindnet-r4nfq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0610 12:32:16.794015    8536 command_runner.go:130] >   kube-system                 kube-proxy-rx2b2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0610 12:32:16.794015    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:16.794015    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:16.794015    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:16.794574    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:16.794574    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:16.794574    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:16.794574    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:16.794637    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:16.794637    8536 command_runner.go:130] > Events:
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0610 12:32:16.794637    8536 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientMemory
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasNoDiskPressure
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientPID
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:16.794637    8536 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:16.794750    8536 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-813300-m02 status is now: NodeReady
	I0610 12:32:16.794750    8536 command_runner.go:130] >   Normal  NodeNotReady             4m1s               node-controller  Node multinode-813300-m02 status is now: NodeNotReady
	I0610 12:32:16.794750    8536 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	I0610 12:32:16.794750    8536 command_runner.go:130] > Name:               multinode-813300-m03
	I0610 12:32:16.794750    8536 command_runner.go:130] > Roles:              <none>
	I0610 12:32:16.794836    8536 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     kubernetes.io/hostname=multinode-813300-m03
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     kubernetes.io/os=linux
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	I0610 12:32:16.794836    8536 command_runner.go:130] >                     minikube.k8s.io/name=multinode-813300
	I0610 12:32:16.794971    8536 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0610 12:32:16.794971    8536 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_10T12_25_53_0700
	I0610 12:32:16.794971    8536 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0610 12:32:16.794971    8536 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0610 12:32:16.794971    8536 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0610 12:32:16.795034    8536 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0610 12:32:16.795034    8536 command_runner.go:130] > CreationTimestamp:  Mon, 10 Jun 2024 12:25:52 +0000
	I0610 12:32:16.795034    8536 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0610 12:32:16.795034    8536 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0610 12:32:16.795034    8536 command_runner.go:130] > Unschedulable:      false
	I0610 12:32:16.795034    8536 command_runner.go:130] > Lease:
	I0610 12:32:16.795034    8536 command_runner.go:130] >   HolderIdentity:  multinode-813300-m03
	I0610 12:32:16.795093    8536 command_runner.go:130] >   AcquireTime:     <unset>
	I0610 12:32:16.795093    8536 command_runner.go:130] >   RenewTime:       Mon, 10 Jun 2024 12:27:04 +0000
	I0610 12:32:16.795093    8536 command_runner.go:130] > Conditions:
	I0610 12:32:16.795093    8536 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0610 12:32:16.795169    8536 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0610 12:32:16.795169    8536 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.795418    8536 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.795418    8536 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.795418    8536 command_runner.go:130] >   Ready            Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0610 12:32:16.795485    8536 command_runner.go:130] > Addresses:
	I0610 12:32:16.795485    8536 command_runner.go:130] >   InternalIP:  172.17.144.46
	I0610 12:32:16.795485    8536 command_runner.go:130] >   Hostname:    multinode-813300-m03
	I0610 12:32:16.795485    8536 command_runner.go:130] > Capacity:
	I0610 12:32:16.795485    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.795485    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.795554    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.795554    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.795554    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.795554    8536 command_runner.go:130] > Allocatable:
	I0610 12:32:16.795554    8536 command_runner.go:130] >   cpu:                2
	I0610 12:32:16.795554    8536 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0610 12:32:16.795554    8536 command_runner.go:130] >   hugepages-2Mi:      0
	I0610 12:32:16.795612    8536 command_runner.go:130] >   memory:             2164264Ki
	I0610 12:32:16.795612    8536 command_runner.go:130] >   pods:               110
	I0610 12:32:16.795612    8536 command_runner.go:130] > System Info:
	I0610 12:32:16.795612    8536 command_runner.go:130] >   Machine ID:                 2d60e1f6e3b2454db505a650eae61212
	I0610 12:32:16.795612    8536 command_runner.go:130] >   System UUID:                b38b4a9a-39f6-6f43-9e6d-19433dc62cd9
	I0610 12:32:16.795676    8536 command_runner.go:130] >   Boot ID:                    0a419483-5289-4d17-96c2-fd4487360412
	I0610 12:32:16.795676    8536 command_runner.go:130] >   Kernel Version:             5.10.207
	I0610 12:32:16.795676    8536 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0610 12:32:16.795676    8536 command_runner.go:130] >   Operating System:           linux
	I0610 12:32:16.795676    8536 command_runner.go:130] >   Architecture:               amd64
	I0610 12:32:16.795736    8536 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0610 12:32:16.795736    8536 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0610 12:32:16.795736    8536 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0610 12:32:16.795736    8536 command_runner.go:130] > PodCIDR:                      10.244.2.0/24
	I0610 12:32:16.795736    8536 command_runner.go:130] > PodCIDRs:                     10.244.2.0/24
	I0610 12:32:16.795799    8536 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0610 12:32:16.795799    8536 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0610 12:32:16.795799    8536 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0610 12:32:16.795799    8536 command_runner.go:130] >   kube-system                 kindnet-2pc4j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	I0610 12:32:16.795856    8536 command_runner.go:130] >   kube-system                 kube-proxy-vw56h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	I0610 12:32:16.795856    8536 command_runner.go:130] > Allocated resources:
	I0610 12:32:16.795856    8536 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0610 12:32:16.795856    8536 command_runner.go:130] >   Resource           Requests   Limits
	I0610 12:32:16.795856    8536 command_runner.go:130] >   --------           --------   ------
	I0610 12:32:16.795856    8536 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0610 12:32:16.795920    8536 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0610 12:32:16.795920    8536 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0610 12:32:16.795920    8536 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0610 12:32:16.795920    8536 command_runner.go:130] > Events:
	I0610 12:32:16.795920    8536 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0610 12:32:16.795979    8536 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0610 12:32:16.795979    8536 command_runner.go:130] >   Normal  Starting                 6m11s                  kube-proxy       
	I0610 12:32:16.796063    8536 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m24s (x2 over 6m24s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientMemory
	I0610 12:32:16.796088    8536 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m24s (x2 over 6m24s)  kubelet          Node multinode-813300-m03 status is now: NodeHasNoDiskPressure
	I0610 12:32:16.796129    8536 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m24s (x2 over 6m24s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientPID
	I0610 12:32:16.796129    8536 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	I0610 12:32:16.796129    8536 command_runner.go:130] >   Normal  RegisteredNode           6m22s                  node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
	I0610 12:32:16.796190    8536 command_runner.go:130] >   Normal  NodeReady                6m3s                   kubelet          Node multinode-813300-m03 status is now: NodeReady
	I0610 12:32:16.796190    8536 command_runner.go:130] >   Normal  NodeNotReady             4m32s                  node-controller  Node multinode-813300-m03 status is now: NodeNotReady
	I0610 12:32:16.796190    8536 command_runner.go:130] >   Normal  RegisteredNode           64s                    node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
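The describe output above shows multinode-813300-m02 and -m03 carrying node.kubernetes.io/unreachable taints with every condition stuck at Unknown ("Kubelet stopped posting node status."), while the control-plane node is Ready. A minimal sketch for reproducing this check by hand, assuming the profile name multinode-813300 from this run and a working minikube binary on the host:

    # Run the same describe against the cluster (hypothetical repro of the
    # gather step above, via minikube's bundled kubectl passthrough):
    minikube -p multinode-813300 kubectl -- describe nodes

    # Print each node name next to the status of its Ready condition:
    minikube -p multinode-813300 kubectl -- get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'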
	I0610 12:32:16.807247    8536 logs.go:123] Gathering logs for dmesg ...
	I0610 12:32:16.807247    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 12:32:16.833337    8536 command_runner.go:130] > [Jun10 12:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.132459] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.024371] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.082449] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.022513] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0610 12:32:16.833337    8536 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +5.764981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +1.334692] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +1.227872] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +7.275008] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0610 12:32:16.833337    8536 command_runner.go:130] > [Jun10 12:30] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.213819] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [ +29.247267] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.109477] kauditd_printk_skb: 73 callbacks suppressed
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.638576] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.214581] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.255487] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +3.027967] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.239865] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0610 12:32:16.833337    8536 command_runner.go:130] > [  +0.216732] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0610 12:32:16.833895    8536 command_runner.go:130] > [  +0.314976] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0610 12:32:16.833895    8536 command_runner.go:130] > [  +0.112938] kauditd_printk_skb: 183 callbacks suppressed
	I0610 12:32:16.833895    8536 command_runner.go:130] > [  +0.871081] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	I0610 12:32:16.833945    8536 command_runner.go:130] > [  +5.053506] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	I0610 12:32:16.833945    8536 command_runner.go:130] > [  +0.123809] kauditd_printk_skb: 34 callbacks suppressed
	I0610 12:32:16.833945    8536 command_runner.go:130] > [Jun10 12:31] kauditd_printk_skb: 62 callbacks suppressed
	I0610 12:32:16.833945    8536 command_runner.go:130] > [  +3.513215] hrtimer: interrupt took 368589 ns
	I0610 12:32:16.833945    8536 command_runner.go:130] > [  +0.107277] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0610 12:32:16.833945    8536 command_runner.go:130] > [  +7.541664] kauditd_printk_skb: 70 callbacks suppressed
	I0610 12:32:16.836396    8536 logs.go:123] Gathering logs for etcd [877ee07c1499] ...
	I0610 12:32:16.836431    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877ee07c1499"
	I0610 12:32:16.866563    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.207374Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208407Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.17.150.144:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.17.150.144:2380","--initial-cluster=multinode-813300=https://172.17.150.144:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.17.150.144:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.17.150.144:2380","--name=multinode-813300","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208499Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.208577Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208593Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.17.150.144:2380"]}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.208715Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.218326Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"]}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.22047Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-813300","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.244201Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.944438ms"}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.274404Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0610 12:32:16.866722    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.303075Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","commit-index":1913}
	I0610 12:32:16.867380    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=()"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became follower at term 2"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.304219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8f4442f54c46fb8d [peers: [], term: 2, commit: 1913, applied: 0, lastindex: 1913, lastterm: 2]"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"warn","ts":"2024-06-10T12:30:56.318917Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.323726Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1273}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.328272Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1642}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.335671Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.347777Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8f4442f54c46fb8d","timeout":"7s"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.349755Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8f4442f54c46fb8d"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.350228Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8f4442f54c46fb8d","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.352715Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.36067Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361057Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.361302Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=(10323449867154160525)"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.363612Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","added-peer-id":"8f4442f54c46fb8d","added-peer-peer-urls":["https://172.17.159.171:2380"]}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364067Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","cluster-version":"3.5"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.364306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.367772Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.373962Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.150.144:2380"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.374209Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.150.144:2380"}
	I0610 12:32:16.867431    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375497Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8f4442f54c46fb8d","initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0610 12:32:16.867993    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:56.375805Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d is starting a new election at term 2"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.50539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became pre-candidate at term 2"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgPreVoteResp from 8f4442f54c46fb8d at term 2"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.505801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became candidate at term 3"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgVoteResp from 8f4442f54c46fb8d at term 3"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became leader at term 3"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.506586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8f4442f54c46fb8d elected leader 8f4442f54c46fb8d at term 3"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.511486Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8f4442f54c46fb8d","local-member-attributes":"{Name:multinode-813300 ClientURLs:[https://172.17.150.144:2379]}","request-path":"/0/members/8f4442f54c46fb8d/attributes","cluster-id":"ede117c4f607edf2","publish-timeout":"7s"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.512682Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.517481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.520973Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0610 12:32:16.868024    8536 command_runner.go:130] ! {"level":"info","ts":"2024-06-10T12:30:57.543402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.150.144:2379"}
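The etcd log above traces a clean single-member restart: the server recovers its WAL at commit index 1913, rejoins as a follower at term 2, pre-votes, and elects itself leader at term 3 before serving client traffic on 2379. A minimal health-check sketch, assuming etcdctl v3 is available inside the guest (it is not necessarily part of the minikube image) and reusing the certificate paths shown in the server flags above:

    # Query member status over the same TLS endpoint etcd advertises:
    sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint status --write-out=table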
	I0610 12:32:16.874308    8536 logs.go:123] Gathering logs for coredns [f2e39052db19] ...
	I0610 12:32:16.874308    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e39052db19"
	I0610 12:32:16.905717    8536 command_runner.go:130] > .:53
	I0610 12:32:16.905778    8536 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	I0610 12:32:16.905778    8536 command_runner.go:130] > CoreDNS-1.11.1
	I0610 12:32:16.905778    8536 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0610 12:32:16.905841    8536 command_runner.go:130] > [INFO] 127.0.0.1:46276 - 35337 "HINFO IN 965239639799927989.3587586823131848737. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.052340371s
	I0610 12:32:16.905841    8536 command_runner.go:130] > [INFO] 10.244.1.2:36040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003047s
	I0610 12:32:16.905895    8536 command_runner.go:130] > [INFO] 10.244.1.2:51901 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.165635405s
	I0610 12:32:16.905895    8536 command_runner.go:130] > [INFO] 10.244.1.2:38890 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.065664181s
	I0610 12:32:16.905895    8536 command_runner.go:130] > [INFO] 10.244.1.2:40219 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.107303974s
	I0610 12:32:16.905895    8536 command_runner.go:130] > [INFO] 10.244.0.3:38184 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002396s
	I0610 12:32:16.905895    8536 command_runner.go:130] > [INFO] 10.244.0.3:57966 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001307s
	I0610 12:32:16.905952    8536 command_runner.go:130] > [INFO] 10.244.0.3:38338 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002131s
	I0610 12:32:16.906008    8536 command_runner.go:130] > [INFO] 10.244.0.3:41898 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000121s
	I0610 12:32:16.906008    8536 command_runner.go:130] > [INFO] 10.244.1.2:49043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200101s
	I0610 12:32:16.906008    8536 command_runner.go:130] > [INFO] 10.244.1.2:53918 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.147842886s
	I0610 12:32:16.906055    8536 command_runner.go:130] > [INFO] 10.244.1.2:50531 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001726s
	I0610 12:32:16.906055    8536 command_runner.go:130] > [INFO] 10.244.1.2:41881 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001246s
	I0610 12:32:16.906055    8536 command_runner.go:130] > [INFO] 10.244.1.2:34708 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.030026838s
	I0610 12:32:16.906055    8536 command_runner.go:130] > [INFO] 10.244.1.2:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002834s
	I0610 12:32:16.906055    8536 command_runner.go:130] > [INFO] 10.244.1.2:58166 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001901s
	I0610 12:32:16.906193    8536 command_runner.go:130] > [INFO] 10.244.1.2:46174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001048s
	I0610 12:32:16.906399    8536 command_runner.go:130] > [INFO] 10.244.0.3:52212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003513s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:44369 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095801s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:38578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001615s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:38593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002977s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:38526 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137201s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:48445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001467s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:47462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000731s
	I0610 12:32:16.906456    8536 command_runner.go:130] > [INFO] 10.244.0.3:58225 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196101s
	I0610 12:32:16.906589    8536 command_runner.go:130] > [INFO] 10.244.1.2:35924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001833s
	I0610 12:32:16.906589    8536 command_runner.go:130] > [INFO] 10.244.1.2:51712 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001386s
	I0610 12:32:16.906708    8536 command_runner.go:130] > [INFO] 10.244.1.2:37161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007s
	I0610 12:32:16.906751    8536 command_runner.go:130] > [INFO] 10.244.1.2:37141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141s
	I0610 12:32:16.906751    8536 command_runner.go:130] > [INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001227s
	I0610 12:32:16.906751    8536 command_runner.go:130] > [INFO] 10.244.0.3:56133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247001s
	I0610 12:32:16.906751    8536 command_runner.go:130] > [INFO] 10.244.0.3:48451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000604s
	I0610 12:32:16.906751    8536 command_runner.go:130] > [INFO] 10.244.0.3:38368 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001264s
	I0610 12:32:16.906836    8536 command_runner.go:130] > [INFO] 10.244.1.2:44129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001056s
	I0610 12:32:16.906877    8536 command_runner.go:130] > [INFO] 10.244.1.2:34710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001955s
	I0610 12:32:16.906910    8536 command_runner.go:130] > [INFO] 10.244.1.2:59467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001589s
	I0610 12:32:16.906910    8536 command_runner.go:130] > [INFO] 10.244.1.2:53581 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002131s
	I0610 12:32:16.906946    8536 command_runner.go:130] > [INFO] 10.244.0.3:41745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	I0610 12:32:16.907003    8536 command_runner.go:130] > [INFO] 10.244.0.3:53512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001784s
	I0610 12:32:16.907003    8536 command_runner.go:130] > [INFO] 10.244.0.3:56441 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001208s
	I0610 12:32:16.907003    8536 command_runner.go:130] > [INFO] 10.244.0.3:55640 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001199s
	I0610 12:32:16.907047    8536 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0610 12:32:16.907047    8536 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0610 12:32:16.910637    8536 logs.go:123] Gathering logs for kube-controller-manager [f1409bf44ff1] ...
	I0610 12:32:16.910637    8536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1409bf44ff1"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:55.502430       1 serving.go:380] Generated self-signed cert in-memory
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.114557       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.114858       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.117078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.117365       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.118392       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:07:56.118623       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.413505       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.413532       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.454038       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.454303       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.454341       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.474947       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.475105       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.475116       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:00.514703       1 shared_informer.go:320] Caches are synced for tokens
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:10.509914       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0610 12:32:16.938932    8536 command_runner.go:130] ! I0610 12:08:10.510020       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0610 12:32:16.939467    8536 command_runner.go:130] ! I0610 12:08:10.511115       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0610 12:32:16.939467    8536 command_runner.go:130] ! I0610 12:08:10.511148       1 shared_informer.go:313] Waiting for caches to sync for node
	I0610 12:32:16.939527    8536 command_runner.go:130] ! I0610 12:08:10.515475       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0610 12:32:16.939527    8536 command_runner.go:130] ! I0610 12:08:10.515547       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0610 12:32:16.939527    8536 command_runner.go:130] ! I0610 12:08:10.516308       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0610 12:32:16.939527    8536 command_runner.go:130] ! I0610 12:08:10.516334       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0610 12:32:16.939593    8536 command_runner.go:130] ! I0610 12:08:10.516340       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0610 12:32:16.939593    8536 command_runner.go:130] ! I0610 12:08:10.531416       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0610 12:32:16.939593    8536 command_runner.go:130] ! I0610 12:08:10.531434       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0610 12:32:16.939652    8536 command_runner.go:130] ! I0610 12:08:10.531293       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0610 12:32:16.939652    8536 command_runner.go:130] ! I0610 12:08:10.543960       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0610 12:32:16.939652    8536 command_runner.go:130] ! I0610 12:08:10.544630       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.544667       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.567000       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.567602       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.568240       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.586627       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0610 12:32:16.939720    8536 command_runner.go:130] ! I0610 12:08:10.587637       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0610 12:32:16.939863    8536 command_runner.go:130] ! I0610 12:08:10.587654       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0610 12:32:16.939892    8536 command_runner.go:130] ! I0610 12:08:10.623685       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 12:32:16.939930    8536 command_runner.go:130] ! I0610 12:08:10.623975       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.624342       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.639985       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.640617       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.640810       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.702326       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.706246       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.711937       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.712131       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.712146       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.712235       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.712265       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.724980       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.726393       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.726653       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.742390       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.743099       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.744498       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.759177       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.759262       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.759917       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.759932       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.901245       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.903470       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:10.903502       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.064066       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.064123       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.064135       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.202164       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.202227       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0610 12:32:16.939971    8536 command_runner.go:130] ! I0610 12:08:11.202239       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0610 12:32:16.940753    8536 command_runner.go:130] ! I0610 12:08:11.352380       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0610 12:32:16.940816    8536 command_runner.go:130] ! I0610 12:08:11.352546       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0610 12:32:16.940847    8536 command_runner.go:130] ! I0610 12:08:11.352575       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0610 12:32:16.940847    8536 command_runner.go:130] ! I0610 12:08:11.656918       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0610 12:32:16.940888    8536 command_runner.go:130] ! I0610 12:08:11.657560       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0610 12:32:16.940888    8536 command_runner.go:130] ! I0610 12:08:11.657950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0610 12:32:16.940923    8536 command_runner.go:130] ! I0610 12:08:11.658269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658437       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658699       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658785       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658822       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658849       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658870       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658895       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658915       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.658987       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.659004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0610 12:32:16.940964    8536 command_runner.go:130] ! I0610 12:08:11.659056       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0610 12:32:16.941493    8536 command_runner.go:130] ! W0610 12:08:11.659073       1 shared_informer.go:597] resyncPeriod 13h6m28.341601393s is smaller than resyncCheckPeriod 19h0m49.916968618s and the informer has already started. Changing it to 19h0m49.916968618s
	I0610 12:32:16.941560    8536 command_runner.go:130] ! I0610 12:08:11.659195       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0610 12:32:16.941560    8536 command_runner.go:130] ! I0610 12:08:11.659214       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0610 12:32:16.941617    8536 command_runner.go:130] ! I0610 12:08:11.659236       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0610 12:32:16.941669    8536 command_runner.go:130] ! I0610 12:08:11.659287       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.659312       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.659579       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.659591       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.659608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.895313       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.895383       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.895693       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:11.896490       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.154521       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.154576       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.154658       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.154690       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.301351       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.301495       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.301508       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.495309       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.495425       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.495645       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.495683       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0610 12:32:16.941699    8536 command_runner.go:130] ! E0610 12:08:12.550245       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.550671       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! E0610 12:08:12.700493       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 12:32:16.941699    8536 command_runner.go:130] ! I0610 12:08:12.700528       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 12:32:16.942439    8536 command_runner.go:130] ! I0610 12:08:12.700538       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0610 12:32:16.942494    8536 command_runner.go:130] ! I0610 12:08:12.859280       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0610 12:32:16.942535    8536 command_runner.go:130] ! I0610 12:08:12.859580       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0610 12:32:16.942535    8536 command_runner.go:130] ! I0610 12:08:12.859953       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0610 12:32:16.942602    8536 command_runner.go:130] ! I0610 12:08:12.906626       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0610 12:32:16.942641    8536 command_runner.go:130] ! I0610 12:08:12.907724       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0610 12:32:16.942696    8536 command_runner.go:130] ! I0610 12:08:13.050431       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0610 12:32:16.942729    8536 command_runner.go:130] ! I0610 12:08:13.050510       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0610 12:32:16.942800    8536 command_runner.go:130] ! I0610 12:08:13.205885       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0610 12:32:16.943076    8536 command_runner.go:130] ! I0610 12:08:13.205970       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0610 12:32:16.943123    8536 command_runner.go:130] ! I0610 12:08:13.205982       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0610 12:32:16.943176    8536 command_runner.go:130] ! I0610 12:08:13.351713       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0610 12:32:16.943176    8536 command_runner.go:130] ! I0610 12:08:13.351815       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0610 12:32:16.943264    8536 command_runner.go:130] ! I0610 12:08:13.351830       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0610 12:32:16.943315    8536 command_runner.go:130] ! I0610 12:08:13.603420       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 12:32:16.943315    8536 command_runner.go:130] ! I0610 12:08:13.603498       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 12:32:16.943315    8536 command_runner.go:130] ! I0610 12:08:13.603510       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0610 12:32:16.943404    8536 command_runner.go:130] ! I0610 12:08:13.750262       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0610 12:32:16.943443    8536 command_runner.go:130] ! I0610 12:08:13.750789       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0610 12:32:16.943443    8536 command_runner.go:130] ! I0610 12:08:13.750809       1 shared_informer.go:313] Waiting for caches to sync for job
	I0610 12:32:16.943443    8536 command_runner.go:130] ! I0610 12:08:13.900118       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0610 12:32:16.943443    8536 command_runner.go:130] ! I0610 12:08:13.900639       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0610 12:32:16.943443    8536 command_runner.go:130] ! I0610 12:08:13.900897       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0610 12:32:16.943588    8536 command_runner.go:130] ! I0610 12:08:14.054008       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.054156       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.054170       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.199527       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.199627       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.199683       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.199694       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.351474       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.352193       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.352213       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.502148       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.502250       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.502262       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.502696       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.502825       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.546684       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.547077       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.547608       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0610 12:32:16.943626    8536 command_runner.go:130] ! I0610 12:08:14.547097       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.944119    8536 command_runner.go:130] ! I0610 12:08:14.547127       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0610 12:32:16.945670    8536 command_runner.go:130] ! I0610 12:08:14.547931       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0610 12:32:16.945732    8536 command_runner.go:130] ! I0610 12:08:14.547138       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.945784    8536 command_runner.go:130] ! I0610 12:08:14.547188       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.548434       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.547199       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.547257       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.548692       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.547265       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.558628       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0610 12:32:16.946533    8536 command_runner.go:130] ! I0610 12:08:14.590023       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300\" does not exist"
	I0610 12:32:16.946749    8536 command_runner.go:130] ! I0610 12:08:14.600506       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 12:32:16.946749    8536 command_runner.go:130] ! I0610 12:08:14.602694       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 12:32:16.946776    8536 command_runner.go:130] ! I0610 12:08:14.603324       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0610 12:32:16.946776    8536 command_runner.go:130] ! I0610 12:08:14.609611       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:32:16.946824    8536 command_runner.go:130] ! I0610 12:08:14.612038       1 shared_informer.go:320] Caches are synced for node
	I0610 12:32:16.946824    8536 command_runner.go:130] ! I0610 12:08:14.623629       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0610 12:32:16.946824    8536 command_runner.go:130] ! I0610 12:08:14.624495       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 12:32:16.946824    8536 command_runner.go:130] ! I0610 12:08:14.612329       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0610 12:32:16.946885    8536 command_runner.go:130] ! I0610 12:08:14.628289       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0610 12:32:16.946885    8536 command_runner.go:130] ! I0610 12:08:14.630516       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 12:32:16.946885    8536 command_runner.go:130] ! I0610 12:08:14.630648       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 12:32:16.946885    8536 command_runner.go:130] ! I0610 12:08:14.622860       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 12:32:16.946885    8536 command_runner.go:130] ! I0610 12:08:14.627541       1 shared_informer.go:320] Caches are synced for endpoint
	I0610 12:32:16.946964    8536 command_runner.go:130] ! I0610 12:08:14.627554       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0610 12:32:16.946964    8536 command_runner.go:130] ! I0610 12:08:14.627562       1 shared_informer.go:320] Caches are synced for namespace
	I0610 12:32:16.947022    8536 command_runner.go:130] ! I0610 12:08:14.627813       1 shared_informer.go:320] Caches are synced for taint
	I0610 12:32:16.947022    8536 command_runner.go:130] ! I0610 12:08:14.631141       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 12:32:16.947069    8536 command_runner.go:130] ! I0610 12:08:14.631364       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:32:16.947069    8536 command_runner.go:130] ! I0610 12:08:14.631669       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:16.947069    8536 command_runner.go:130] ! I0610 12:08:14.631834       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 12:32:16.947069    8536 command_runner.go:130] ! I0610 12:08:14.642451       1 shared_informer.go:320] Caches are synced for PV protection
	I0610 12:32:16.947134    8536 command_runner.go:130] ! I0610 12:08:14.644828       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0610 12:32:16.947134    8536 command_runner.go:130] ! I0610 12:08:14.645380       1 shared_informer.go:320] Caches are synced for GC
	I0610 12:32:16.947177    8536 command_runner.go:130] ! I0610 12:08:14.647678       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 12:32:16.947177    8536 command_runner.go:130] ! I0610 12:08:14.648798       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 12:32:16.947219    8536 command_runner.go:130] ! I0610 12:08:14.648809       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 12:32:16.947219    8536 command_runner.go:130] ! I0610 12:08:14.648848       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 12:32:16.947273    8536 command_runner.go:130] ! I0610 12:08:14.656075       1 shared_informer.go:320] Caches are synced for HPA
	I0610 12:32:16.947273    8536 command_runner.go:130] ! I0610 12:08:14.656781       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0610 12:32:16.947273    8536 command_runner.go:130] ! I0610 12:08:14.657449       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:32:16.947342    8536 command_runner.go:130] ! I0610 12:08:14.657643       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:32:16.947342    8536 command_runner.go:130] ! I0610 12:08:14.658125       1 shared_informer.go:320] Caches are synced for expand
	I0610 12:32:16.947342    8536 command_runner.go:130] ! I0610 12:08:14.661079       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:32:16.947400    8536 command_runner.go:130] ! I0610 12:08:14.668926       1 shared_informer.go:320] Caches are synced for service account
	I0610 12:32:16.947400    8536 command_runner.go:130] ! I0610 12:08:14.675620       1 shared_informer.go:320] Caches are synced for TTL
	I0610 12:32:16.947440    8536 command_runner.go:130] ! I0610 12:08:14.680953       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300" podCIDRs=["10.244.0.0/24"]
	I0610 12:32:16.947440    8536 command_runner.go:130] ! I0610 12:08:14.687842       1 shared_informer.go:320] Caches are synced for deployment
	I0610 12:32:16.947502    8536 command_runner.go:130] ! I0610 12:08:14.751377       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0610 12:32:16.947502    8536 command_runner.go:130] ! I0610 12:08:14.754827       1 shared_informer.go:320] Caches are synced for crt configmap
	I0610 12:32:16.947502    8536 command_runner.go:130] ! I0610 12:08:14.795731       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:32:16.947557    8536 command_runner.go:130] ! I0610 12:08:14.803976       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 12:32:16.947557    8536 command_runner.go:130] ! I0610 12:08:14.807376       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0610 12:32:16.947557    8536 command_runner.go:130] ! I0610 12:08:14.807800       1 shared_informer.go:320] Caches are synced for cronjob
	I0610 12:32:16.947557    8536 command_runner.go:130] ! I0610 12:08:14.851108       1 shared_informer.go:320] Caches are synced for job
	I0610 12:32:16.947611    8536 command_runner.go:130] ! I0610 12:08:14.858915       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:16.947611    8536 command_runner.go:130] ! I0610 12:08:14.859692       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:32:16.947611    8536 command_runner.go:130] ! I0610 12:08:14.864873       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0610 12:32:16.947686    8536 command_runner.go:130] ! I0610 12:08:15.295934       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:16.947686    8536 command_runner.go:130] ! I0610 12:08:15.296041       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:32:16.947686    8536 command_runner.go:130] ! I0610 12:08:15.332772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:32:16.947726    8536 command_runner.go:130] ! I0610 12:08:15.887603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="329.520484ms"
	I0610 12:32:16.947726    8536 command_runner.go:130] ! I0610 12:08:16.024148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.478301ms"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:16.151441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.784808ms"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:16.151859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="288.402µs"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:16.577624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.03545ms"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:16.593339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.556101ms"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:16.593508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.3µs"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:30.535681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130µs"
	I0610 12:32:16.947774    8536 command_runner.go:130] ! I0610 12:08:30.566310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.4µs"
	I0610 12:32:16.947924    8536 command_runner.go:130] ! I0610 12:08:32.538906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="180.301µs"
	I0610 12:32:16.947924    8536 command_runner.go:130] ! I0610 12:08:32.610537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.137489ms"
	I0610 12:32:16.947924    8536 command_runner.go:130] ! I0610 12:08:32.611020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5µs"
	I0610 12:32:16.947924    8536 command_runner.go:130] ! I0610 12:08:34.635560       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:16.947924    8536 command_runner.go:130] ! I0610 12:11:28.859639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:32:16.948054    8536 command_runner.go:130] ! I0610 12:11:28.879298       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m02" podCIDRs=["10.244.1.0/24"]
	I0610 12:32:16.948054    8536 command_runner.go:130] ! I0610 12:11:29.670639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:32:16.948054    8536 command_runner.go:130] ! I0610 12:11:51.574110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:16.948118    8536 command_runner.go:130] ! I0610 12:12:19.785464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.490556ms"
	I0610 12:32:16.948162    8536 command_runner.go:130] ! I0610 12:12:19.804051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.524284ms"
	I0610 12:32:16.948162    8536 command_runner.go:130] ! I0610 12:12:19.806222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0610 12:32:16.948162    8536 command_runner.go:130] ! I0610 12:12:19.813010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.401µs"
	I0610 12:32:16.948228    8536 command_runner.go:130] ! I0610 12:12:19.818841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.9µs"
	I0610 12:32:16.948228    8536 command_runner.go:130] ! I0610 12:12:22.803157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.023114ms"
	I0610 12:32:16.948273    8536 command_runner.go:130] ! I0610 12:12:22.803959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.7µs"
	I0610 12:32:16.948310    8536 command_runner.go:130] ! I0610 12:12:23.117968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.704624ms"
	I0610 12:32:16.948310    8536 command_runner.go:130] ! I0610 12:12:23.118507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.5µs"
	I0610 12:32:16.948352    8536 command_runner.go:130] ! I0610 12:25:52.678571       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:32:16.948388    8536 command_runner.go:130] ! I0610 12:25:52.681612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:16.948421    8536 command_runner.go:130] ! I0610 12:25:52.698797       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m03" podCIDRs=["10.244.2.0/24"]
	I0610 12:32:16.948451    8536 command_runner.go:130] ! I0610 12:25:54.878967       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:32:16.948451    8536 command_runner.go:130] ! I0610 12:26:13.380155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:16.948451    8536 command_runner.go:130] ! I0610 12:27:44.944679       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:32:16.948451    8536 command_runner.go:130] ! I0610 12:28:15.516170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.644756ms"
	I0610 12:32:16.948451    8536 command_runner.go:130] ! I0610 12:28:15.516815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.1µs"
	I0610 12:32:16.969018    8536 logs.go:123] Gathering logs for container status ...
	I0610 12:32:16.969018    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 12:32:17.043020    8536 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0610 12:32:17.043184    8536 command_runner.go:130] > b9550940a81ca       8c811b4aec35f                                                                                         13 seconds ago       Running             busybox                   1                   c4d124cebb3b3       busybox-fc5497c4f-z28tq
	I0610 12:32:17.043184    8536 command_runner.go:130] > 24f3f7e041f98       cbb01a7bd410d                                                                                         13 seconds ago       Running             coredns                   1                   241c4811748fa       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:17.043281    8536 command_runner.go:130] > e934ffe0f9032       6e38f40d628db                                                                                         30 seconds ago       Running             storage-provisioner       2                   2dd9b423841c9       storage-provisioner
	I0610 12:32:17.043318    8536 command_runner.go:130] > c3c4316beca64       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   0c19b39e15f6a       kindnet-29gbv
	I0610 12:32:17.043318    8536 command_runner.go:130] > cc9dbe4aa4005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   2dd9b423841c9       storage-provisioner
	I0610 12:32:17.043371    8536 command_runner.go:130] > 1de5fa0ef8384       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   06d997d7c306c       kube-proxy-nrpvt
	I0610 12:32:17.043406    8536 command_runner.go:130] > d7941126134f2       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   5c3da3b59b527       kube-apiserver-multinode-813300
	I0610 12:32:17.043430    8536 command_runner.go:130] > 877ee07c14997       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   b13c0058ce265       etcd-multinode-813300
	I0610 12:32:17.043430    8536 command_runner.go:130] > d90e72ef46704       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   8902dac03acbc       kube-scheduler-multinode-813300
	I0610 12:32:17.043430    8536 command_runner.go:130] > 3bee53d5fef91       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   f56cc8af37db0       kube-controller-manager-multinode-813300
	I0610 12:32:17.043430    8536 command_runner.go:130] > 91782a06524c6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   9ffef928b2474       busybox-fc5497c4f-z28tq
	I0610 12:32:17.043430    8536 command_runner.go:130] > f2e39052db195       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   a1ae7aed00678       coredns-7db6d8ff4d-kbhvv
	I0610 12:32:17.043430    8536 command_runner.go:130] > c39d54960e7d7       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   689b8976cc029       kindnet-29gbv
	I0610 12:32:17.043430    8536 command_runner.go:130] > afad8b05897e5       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   62db1c721951a       kube-proxy-nrpvt
	I0610 12:32:17.043430    8536 command_runner.go:130] > bd1a6cd987430       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   e3b6aa9a0e1d1       kube-scheduler-multinode-813300
	I0610 12:32:17.043430    8536 command_runner.go:130] > f1409bf44ff14       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f04d7b3d4fcc6       kube-controller-manager-multinode-813300
	I0610 12:32:17.046097    8536 logs.go:123] Gathering logs for kubelet ...
	I0610 12:32:17.046139    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 12:32:17.080844    8536 command_runner.go:130] > Jun 10 12:30:48 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:17.081485    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322075    1392 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:17.081570    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.322142    1392 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: I0610 12:30:49.324143    1392 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 kubelet[1392]: E0610 12:30:49.325228    1392 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:49 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078361    1448 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078445    1448 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: I0610 12:30:50.078696    1448 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 kubelet[1448]: E0610 12:30:50.078819    1448 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:50 multinode-813300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:53 multinode-813300 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021338    1528 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.021853    1528 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.022286    1528 server.go:927] "Client rotation is on, will bootstrap in background"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.024650    1528 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.040752    1528 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.082883    1528 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.083180    1528 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085143    1528 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.085256    1528 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-813300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.086924    1528 topology_manager.go:138] "Creating topology manager with none policy"
	I0610 12:32:17.081602    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.087122    1528 container_manager_linux.go:301] "Creating device plugin manager"
	I0610 12:32:17.082192    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.088486    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:17.082192    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.090915    1528 kubelet.go:400] "Attempting to sync node with API server"
	I0610 12:32:17.082243    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091108    1528 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0610 12:32:17.082243    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.091402    1528 kubelet.go:312] "Adding apiserver pod source"
	I0610 12:32:17.082282    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.092259    1528 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0610 12:32:17.082282    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.097253    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.082351    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.097520    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.082392    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.099693    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.082443    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.099740    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.082484    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.099843    1528 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0610 12:32:17.082563    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.102710    1528 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0610 12:32:17.082563    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.103981    1528 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0610 12:32:17.082563    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.107194    1528 server.go:1264] "Started kubelet"
	I0610 12:32:17.082609    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.120692    1528 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0610 12:32:17.082655    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.122088    1528 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0610 12:32:17.082703    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.125028    1528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0610 12:32:17.082731    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.128857    1528 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0610 12:32:17.082731    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.132449    1528 server.go:455] "Adding debug handlers to kubelet server"
	I0610 12:32:17.082781    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.124281    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:17.082838    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.137444    1528 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0610 12:32:17.082838    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.139221    1528 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0610 12:32:17.082838    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.141909    1528 factory.go:221] Registration of the systemd container factory successfully
	I0610 12:32:17.082902    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147241    1528 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0610 12:32:17.082902    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.147375    1528 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0610 12:32:17.082969    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.144942    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="200ms"
	I0610 12:32:17.082969    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.143108    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.083064    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.154145    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.083064    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.179909    1528 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0610 12:32:17.083124    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180022    1528 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0610 12:32:17.083124    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.180086    1528 state_mem.go:36] "Initialized new in-memory state store"
	I0610 12:32:17.083124    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181162    1528 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0610 12:32:17.083124    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181233    1528 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0610 12:32:17.083193    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.181261    1528 policy_none.go:49] "None policy: Start"
	I0610 12:32:17.083193    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.192385    1528 reconciler.go:26] "Reconciler: start to sync state"
	I0610 12:32:17.083193    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193179    1528 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0610 12:32:17.083193    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193256    1528 state_mem.go:35] "Initializing new in-memory state store"
	I0610 12:32:17.083266    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.193830    1528 state_mem.go:75] "Updated machine memory state"
	I0610 12:32:17.083266    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.197194    1528 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0610 12:32:17.083266    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.204265    1528 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0610 12:32:17.083266    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.219894    1528 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0610 12:32:17.083361    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.226098    1528 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-813300\" not found"
	I0610 12:32:17.083361    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.226649    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0610 12:32:17.083361    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.230123    1528 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0610 12:32:17.083435    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231021    1528 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0610 12:32:17.083435    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.231133    1528 kubelet.go:2337] "Starting kubelet main sync loop"
	I0610 12:32:17.083435    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.231189    1528 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0610 12:32:17.083499    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.244084    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:17.083499    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: W0610 12:30:54.247037    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.083562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.247227    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.083562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.253607    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:17.083562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.255809    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:17.083562    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:17.083644    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:17.083702    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:17.083702    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:17.083814    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334683    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62db1c721951a36c62a6369a30c651a661eb2871f8363fa341ef8ad7b7080a07"
	I0610 12:32:17.083814    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.334742    1528 topology_manager.go:215] "Topology Admit Handler" podUID="180cf4cc399d604c28cc4df1442ebd5a" podNamespace="kube-system" podName="kube-apiserver-multinode-813300"
	I0610 12:32:17.083814    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.336338    1528 topology_manager.go:215] "Topology Admit Handler" podUID="37865ce1914dc04a4a0a25e98b80ce35" podNamespace="kube-system" podName="kube-controller-manager-multinode-813300"
	I0610 12:32:17.083941    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.338106    1528 topology_manager.go:215] "Topology Admit Handler" podUID="4d9c84710aef19c4449f4b7691d0af07" podNamespace="kube-system" podName="kube-scheduler-multinode-813300"
	I0610 12:32:17.083977    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340794    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7d28a97ba1c48cbe8edd3eab76f64cdcdebf920a03921644f63d12856b642f0"
	I0610 12:32:17.084043    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.340848    1528 topology_manager.go:215] "Topology Admit Handler" podUID="76e8893277ba7cea6624561880496e47" podNamespace="kube-system" podName="etcd-multinode-813300"
	I0610 12:32:17.084113    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.341927    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04d7b3d4fcc648cd6b447a383defba86200f1071acc892670457ebeebb52f22"
	I0610 12:32:17.084113    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.342208    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bc6043f7b92f091f4ceee7db3e11617072391c6e5303f4ecdafdb06d4b585a"
	I0610 12:32:17.084113    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.356667    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="400ms"
	I0610 12:32:17.084113    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.365771    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ae7aed00678050d16cc1436a741d75bc6696cf5eaebed8ae8b0cae97b4f12c"
	I0610 12:32:17.084229    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.380268    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b6aa9a0e1d1cbcee858808fc74f396cfba20777f2316093484920397e9b4ca"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397846    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-ca-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397877    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397922    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-flexvolume-dir\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397961    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-k8s-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.397979    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-kubeconfig\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398000    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-data\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398019    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/180cf4cc399d604c28cc4df1442ebd5a-k8s-certs\") pod \"kube-apiserver-multinode-813300\" (UID: \"180cf4cc399d604c28cc4df1442ebd5a\") " pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398038    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37865ce1914dc04a4a0a25e98b80ce35-ca-certs\") pod \"kube-controller-manager-multinode-813300\" (UID: \"37865ce1914dc04a4a0a25e98b80ce35\") " pod="kube-system/kube-controller-manager-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398055    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d9c84710aef19c4449f4b7691d0af07-kubeconfig\") pod \"kube-scheduler-multinode-813300\" (UID: \"4d9c84710aef19c4449f4b7691d0af07\") " pod="kube-system/kube-scheduler-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.398073    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/76e8893277ba7cea6624561880496e47-etcd-certs\") pod \"etcd-multinode-813300\" (UID: \"76e8893277ba7cea6624561880496e47\") " pod="kube-system/etcd-multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.400870    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffef928b24740a4440a1de8329cbd26462bc96c0ff48ed0b63603e8d2c2924d"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.416196    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689b8976cc0293bf6ae2ffaf7abbe0a59cfa7521907fd652e86da3912515d25d"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.442360    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10e49596de5e51f9986bebf2105f07084a083e5e8c2ab50684531210b032662"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.454932    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.456598    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.759421    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="800ms"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: I0610 12:30:54.858477    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:54 multinode-813300 kubelet[1528]: E0610 12:30:54.859580    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.205231    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.205310    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.248476    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.084297    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.249836    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-813300&limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.085115    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.406658    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.085115    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.406731    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.085115    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.487592    1528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3da3b59b527b7aa8a8d5616cf847dcdafe435065f549d7c2b464322ff73b99"
	I0610 12:32:17.085213    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.561164    1528 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-813300?timeout=10s\": dial tcp 172.17.150.144:8443: connect: connection refused" interval="1.6s"
	I0610 12:32:17.085213    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: I0610 12:30:55.661352    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:17.085313    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.663943    1528 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.17.150.144:8443: connect: connection refused" node="multinode-813300"
	I0610 12:32:17.085313    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: W0610 12:30:55.751130    1528 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.085388    8536 command_runner.go:130] > Jun 10 12:30:55 multinode-813300 kubelet[1528]: E0610 12:30:55.751205    1528 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.17.150.144:8443: connect: connection refused
	I0610 12:32:17.085465    8536 command_runner.go:130] > Jun 10 12:30:56 multinode-813300 kubelet[1528]: E0610 12:30:56.215699    1528 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.17.150.144:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-813300.17d7a4805e219e54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-813300,UID:multinode-813300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-813300,},FirstTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,LastTimestamp:2024-06-10 12:30:54.107164244 +0000 UTC m=+0.198287063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-813300,}"
	I0610 12:32:17.085465    8536 command_runner.go:130] > Jun 10 12:30:57 multinode-813300 kubelet[1528]: I0610 12:30:57.265569    1528 kubelet_node_status.go:73] "Attempting to register node" node="multinode-813300"
	I0610 12:32:17.085465    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636898    1528 kubelet_node_status.go:112] "Node was previously registered" node="multinode-813300"
	I0610 12:32:17.085465    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.636993    1528 kubelet_node_status.go:76] "Successfully registered node" node="multinode-813300"
	I0610 12:32:17.085545    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.638685    1528 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0610 12:32:17.085545    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639257    1528 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0610 12:32:17.085698    8536 command_runner.go:130] > Jun 10 12:30:59 multinode-813300 kubelet[1528]: I0610 12:30:59.639985    1528 setters.go:580] "Node became not ready" node="multinode-813300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-10T12:30:59Z","lastTransitionTime":"2024-06-10T12:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0610 12:32:17.085698    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.103240    1528 apiserver.go:52] "Watching apiserver"
	I0610 12:32:17.085698    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109200    1528 topology_manager.go:215] "Topology Admit Handler" podUID="40bf0aff-00b2-40c7-bed7-52b8cadbc3a1" podNamespace="kube-system" podName="kube-proxy-nrpvt"
	I0610 12:32:17.085698    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109472    1528 topology_manager.go:215] "Topology Admit Handler" podUID="aad8124e-6c05-4719-9adb-edc11b3cce42" podNamespace="kube-system" podName="kindnet-29gbv"
	I0610 12:32:17.085698    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109721    1528 topology_manager.go:215] "Topology Admit Handler" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kbhvv"
	I0610 12:32:17.085818    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.109954    1528 topology_manager.go:215] "Topology Admit Handler" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e" podNamespace="kube-system" podName="storage-provisioner"
	I0610 12:32:17.085818    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110077    1528 topology_manager.go:215] "Topology Admit Handler" podUID="3191c71a-8c87-4390-8232-8653f494d1f0" podNamespace="default" podName="busybox-fc5497c4f-z28tq"
	I0610 12:32:17.085911    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.110308    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.085911    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.110641    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-813300" podUID="f824b391-b3d2-49ec-ba7d-863cb2150f81"
	I0610 12:32:17.085911    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.111896    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:17.085996    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.115871    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.085996    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.147565    1528 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0610 12:32:17.086098    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.155423    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-813300"
	I0610 12:32:17.086098    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160314    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6dfedc3-d6ff-412c-8a13-40a493c4199e-tmp\") pod \"storage-provisioner\" (UID: \"f6dfedc3-d6ff-412c-8a13-40a493c4199e\") " pod="kube-system/storage-provisioner"
	I0610 12:32:17.086177    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160428    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-cni-cfg\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:17.086177    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.160790    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-xtables-lock\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:17.086255    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161224    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-xtables-lock\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:17.086255    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.161359    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40bf0aff-00b2-40c7-bed7-52b8cadbc3a1-lib-modules\") pod \"kube-proxy-nrpvt\" (UID: \"40bf0aff-00b2-40c7-bed7-52b8cadbc3a1\") " pod="kube-system/kube-proxy-nrpvt"
	I0610 12:32:17.086333    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162089    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.086333    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.162182    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.662151031 +0000 UTC m=+6.753273950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.086414    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.162238    1528 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aad8124e-6c05-4719-9adb-edc11b3cce42-lib-modules\") pod \"kindnet-29gbv\" (UID: \"aad8124e-6c05-4719-9adb-edc11b3cce42\") " pod="kube-system/kindnet-29gbv"
	I0610 12:32:17.086414    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.175000    1528 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-813300"
	I0610 12:32:17.086414    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.186991    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086491    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187290    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086568    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.187519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:00.687498638 +0000 UTC m=+6.778621457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086568    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.246331    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f80d01e953cc664fc05c397fdad000" path="/var/lib/kubelet/pods/93f80d01e953cc664fc05c397fdad000/volumes"
	I0610 12:32:17.086568    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.248399    1528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa7bd9cfb361baaed8d7d5729a6c77c" path="/var/lib/kubelet/pods/baa7bd9cfb361baaed8d7d5729a6c77c/volumes"
	I0610 12:32:17.086647    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.316426    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-813300" podStartSLOduration=0.316407314 podStartE2EDuration="316.407314ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.316147208 +0000 UTC m=+6.407270027" watchObservedRunningTime="2024-06-10 12:31:00.316407314 +0000 UTC m=+6.407530233"
	I0610 12:32:17.086722    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.439081    1528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-813300" podStartSLOduration=0.439018164 podStartE2EDuration="439.018164ms" podCreationTimestamp="2024-06-10 12:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 12:31:00.409703778 +0000 UTC m=+6.500826597" watchObservedRunningTime="2024-06-10 12:31:00.439018164 +0000 UTC m=+6.530141083"
	I0610 12:32:17.086722    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: I0610 12:31:00.631684    1528 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-813300" podUID="e48af956-8533-4b8e-be5d-0834484cbffa"
	I0610 12:32:17.086799    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667882    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.086799    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.667966    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.667947638 +0000 UTC m=+7.759070557 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.086878    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769226    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086878    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769334    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086955    8536 command_runner.go:130] > Jun 10 12:31:00 multinode-813300 kubelet[1528]: E0610 12:31:00.769428    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:01.769408565 +0000 UTC m=+7.860531384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.086955    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.231939    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.087032    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.679952    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.087032    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.680142    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.680120563 +0000 UTC m=+9.771243482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.087110    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.781772    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087110    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782050    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087186    8536 command_runner.go:130] > Jun 10 12:31:01 multinode-813300 kubelet[1528]: E0610 12:31:01.782132    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:03.7821123 +0000 UTC m=+9.873235219 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087277    8536 command_runner.go:130] > Jun 10 12:31:02 multinode-813300 kubelet[1528]: E0610 12:31:02.234039    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.087277    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.232296    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.087353    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.701884    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.087353    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.702058    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.702037863 +0000 UTC m=+13.793160782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.087433    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802160    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087433    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802233    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087517    8536 command_runner.go:130] > Jun 10 12:31:03 multinode-813300 kubelet[1528]: E0610 12:31:03.802292    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:07.802272966 +0000 UTC m=+13.893395785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087517    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.207349    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.087611    8536 command_runner.go:130] > Jun 10 12:31:04 multinode-813300 kubelet[1528]: E0610 12:31:04.238069    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.087611    8536 command_runner.go:130] > Jun 10 12:31:05 multinode-813300 kubelet[1528]: E0610 12:31:05.232753    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.087707    8536 command_runner.go:130] > Jun 10 12:31:06 multinode-813300 kubelet[1528]: E0610 12:31:06.233804    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.087707    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.231988    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.087805    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736592    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.087805    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.736825    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.736801176 +0000 UTC m=+21.827923995 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.087887    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837037    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087887    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837146    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.087968    8536 command_runner.go:130] > Jun 10 12:31:07 multinode-813300 kubelet[1528]: E0610 12:31:07.837219    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:15.837199504 +0000 UTC m=+21.928322423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.088003    8536 command_runner.go:130] > Jun 10 12:31:08 multinode-813300 kubelet[1528]: E0610 12:31:08.232310    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.208416    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:09 multinode-813300 kubelet[1528]: E0610 12:31:09.231620    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:10 multinode-813300 kubelet[1528]: E0610 12:31:10.233882    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:11 multinode-813300 kubelet[1528]: E0610 12:31:11.232126    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:12 multinode-813300 kubelet[1528]: E0610 12:31:12.233695    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:13 multinode-813300 kubelet[1528]: E0610 12:31:13.231660    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.210433    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:14 multinode-813300 kubelet[1528]: E0610 12:31:14.234870    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.232790    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816637    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.816990    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.816931565 +0000 UTC m=+37.908054384 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918429    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918619    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:15 multinode-813300 kubelet[1528]: E0610 12:31:15.918694    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:31:31.918675278 +0000 UTC m=+38.009798097 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:16 multinode-813300 kubelet[1528]: E0610 12:31:16.234954    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088052    8536 command_runner.go:130] > Jun 10 12:31:17 multinode-813300 kubelet[1528]: E0610 12:31:17.231668    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088632    8536 command_runner.go:130] > Jun 10 12:31:18 multinode-813300 kubelet[1528]: E0610 12:31:18.232656    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088632    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.214153    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.088632    8536 command_runner.go:130] > Jun 10 12:31:19 multinode-813300 kubelet[1528]: E0610 12:31:19.231639    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088632    8536 command_runner.go:130] > Jun 10 12:31:20 multinode-813300 kubelet[1528]: E0610 12:31:20.234429    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088788    8536 command_runner.go:130] > Jun 10 12:31:21 multinode-813300 kubelet[1528]: E0610 12:31:21.232080    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:22 multinode-813300 kubelet[1528]: E0610 12:31:22.232638    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:23 multinode-813300 kubelet[1528]: E0610 12:31:23.233105    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.216593    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:24 multinode-813300 kubelet[1528]: E0610 12:31:24.233280    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:25 multinode-813300 kubelet[1528]: E0610 12:31:25.232513    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:26 multinode-813300 kubelet[1528]: E0610 12:31:26.232337    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:27 multinode-813300 kubelet[1528]: E0610 12:31:27.233152    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:28 multinode-813300 kubelet[1528]: E0610 12:31:28.234103    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.218816    1528 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:29 multinode-813300 kubelet[1528]: E0610 12:31:29.232070    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:30 multinode-813300 kubelet[1528]: E0610 12:31:30.231766    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.231673    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884791    1528 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0610 12:32:17.088824    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.884975    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume podName:c9da505f-fd4e-4c29-ad69-3b5ac1e51e98 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.884956587 +0000 UTC m=+69.976079506 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c9da505f-fd4e-4c29-ad69-3b5ac1e51e98-config-volume") pod "coredns-7db6d8ff4d-kbhvv" (UID: "c9da505f-fd4e-4c29-ad69-3b5ac1e51e98") : object "kube-system"/"coredns" not registered
	I0610 12:32:17.089417    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985181    1528 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.089417    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985216    1528 projected.go:200] Error preparing data for projected volume kube-api-access-tkl2j for pod default/busybox-fc5497c4f-z28tq: object "default"/"kube-root-ca.crt" not registered
	I0610 12:32:17.089417    8536 command_runner.go:130] > Jun 10 12:31:31 multinode-813300 kubelet[1528]: E0610 12:31:31.985519    1528 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j podName:3191c71a-8c87-4390-8232-8653f494d1f0 nodeName:}" failed. No retries permitted until 2024-06-10 12:32:03.98525598 +0000 UTC m=+70.076378799 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tkl2j" (UniqueName: "kubernetes.io/projected/3191c71a-8c87-4390-8232-8653f494d1f0-kube-api-access-tkl2j") pod "busybox-fc5497c4f-z28tq" (UID: "3191c71a-8c87-4390-8232-8653f494d1f0") : object "default"/"kube-root-ca.crt" not registered
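The MountVolume.SetUp failures for these two pods back off at 4s, 8s, 16s and then 32s: the kubelet doubles durationBeforeRetry after every failed attempt until the referenced objects ("kube-system"/"coredns", "default"/"kube-root-ca.crt") are registered again. A minimal Go sketch of that doubling pattern (the cap value and names below are illustrative, not the kubelet's actual implementation):

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the retry delay up to an assumed cap, mirroring the
    // 4s -> 8s -> 16s -> 32s progression in the kubelet log above.
    func nextDelay(cur time.Duration) time.Duration {
        const maxDelay = 2 * time.Minute // illustrative cap, not kubelet's real constant
        if next := cur * 2; next < maxDelay {
            return next
        }
        return maxDelay
    }

    func main() {
        d := 4 * time.Second
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, d)
            d = nextDelay(d)
        }
    }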
	I0610 12:32:17.089417    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.232018    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-z28tq" podUID="3191c71a-8c87-4390-8232-8653f494d1f0"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.476305    1528 scope.go:117] "RemoveContainer" containerID="d32ce22e31b06bacb7530f3513c1f864d77685269868404ad7c71a4f15d91e41"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: I0610 12:31:32.477175    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.477659    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f6dfedc3-d6ff-412c-8a13-40a493c4199e)\"" pod="kube-system/storage-provisioner" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:33 multinode-813300 kubelet[1528]: E0610 12:31:33.232631    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:47 multinode-813300 kubelet[1528]: I0610 12:31:47.231895    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.214930    1528 scope.go:117] "RemoveContainer" containerID="34b9299d74e382eadb8e7df1029506efc87e283ac8b38024d9524b8bb815f705"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: E0610 12:31:54.266189    1528 iptables.go:577] "Could not set up iptables canary" err=<
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0610 12:32:17.089568    8536 command_runner.go:130] > Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.275663    1528 scope.go:117] "RemoveContainer" containerID="ba52603f8387590319a4d5a9511265065e2f90bff6628bec2f622754e034c70a"
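The "iptables canary" errors above come from the kubelet periodically creating a throwaway KUBE-KUBELET-CANARY chain to detect whether something has flushed its rules; on this guest the IPv6 half fails because the ip6tables nat table is unavailable ("do you need to insmod?"). A hedged sketch of the same probe from Go (the chain and table names are taken from the log; the rest is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Try to create the canary chain in the ip6tables nat table, the
        // operation the kubelet reports as failing above. Needs root.
        out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
        if err != nil {
            fmt.Printf("canary chain create failed: %v\n%s", err, out)
            return
        }
        fmt.Println("ip6tables nat table is usable")
    }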
	I0610 12:32:19.643034    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods
	I0610 12:32:19.643034    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:19.643034    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:19.643034    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:19.650110    8536 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 12:32:19.650110    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:19.650110    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:19.650110    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:19 GMT
	I0610 12:32:19.650110    8536 round_trippers.go:580]     Audit-Id: ffda0f58-706e-4237-be51-4f259c9a61a6
	I0610 12:32:19.650110    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:19.650110    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:19.650110    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:19.652085    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1841"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1827","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0610 12:32:19.656195    8536 system_pods.go:59] 12 kube-system pods found
	I0610 12:32:19.656195    8536 system_pods.go:61] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "etcd-multinode-813300" [f9259e5e-61e9-4252-b7c6-de5d499eb9c1] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kindnet-2pc4j" [966ce4c1-e9ee-48d6-9e52-98143fa03e67] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kindnet-r4nfq" [dceb3d20-8d04-4408-927f-1c195558dd19] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-apiserver-multinode-813300" [2cf29b2c-a2a9-46ec-bbc8-fe884e97df06] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-proxy-rx2b2" [ce59a99b-a561-4598-9399-147f748433a2] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-proxy-vw56h" [f3f9e738-89d2-4776-a212-a1ca28952f7c] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running
	I0610 12:32:19.656195    8536 system_pods.go:61] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:32:19.656195    8536 system_pods.go:74] duration metric: took 3.8088891s to wait for pod list to return data ...
	I0610 12:32:19.656195    8536 default_sa.go:34] waiting for default service account to be created ...
	I0610 12:32:19.656428    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/default/serviceaccounts
	I0610 12:32:19.656428    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:19.656428    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:19.656428    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:19.659009    8536 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 12:32:19.660013    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:19.660013    8536 round_trippers.go:580]     Audit-Id: 87f7f8e9-1525-4e4f-affd-e17bc21a5585
	I0610 12:32:19.660013    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:19.660013    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:19.660013    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:19.660013    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:19.660013    8536 round_trippers.go:580]     Content-Length: 262
	I0610 12:32:19.660013    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:19 GMT
	I0610 12:32:19.660013    8536 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1841"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2033967b-ff48-4641-b518-45705bf023c6","resourceVersion":"336","creationTimestamp":"2024-06-10T12:08:15Z"}}]}
	I0610 12:32:19.660013    8536 default_sa.go:45] found service account: "default"
	I0610 12:32:19.660013    8536 default_sa.go:55] duration metric: took 3.8179ms for default service account to be created ...
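The GET /api/v1/namespaces/default/serviceaccounts trace above is minikube polling until the "default" ServiceAccount exists, since it is created asynchronously after the control plane comes up. A minimal client-go sketch of the same wait (the kubeconfig path, interval and timeout are assumptions; minikube's own loop differs in detail):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, time.Minute, true,
            func(ctx context.Context) (bool, error) {
                // Keep polling until the ServiceAccount comes back, as the
                // round-tripper trace above shows minikube doing.
                _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                return err == nil, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("found service account: \"default\"")
    }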
	I0610 12:32:19.660013    8536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 12:32:19.660013    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/namespaces/kube-system/pods
	I0610 12:32:19.660013    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:19.660013    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:19.660013    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:19.666359    8536 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 12:32:19.666359    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:19.666359    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:19 GMT
	I0610 12:32:19.666359    8536 round_trippers.go:580]     Audit-Id: f3197283-3d72-4030-83e3-14ba38baaa31
	I0610 12:32:19.666541    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:19.666541    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:19.666541    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:19.666541    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:19.668047    8536 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1841"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-kbhvv","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c9da505f-fd4e-4c29-ad69-3b5ac1e51e98","resourceVersion":"1827","creationTimestamp":"2024-06-10T12:08:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"651691d0-e491-4bc5-a199-6caa9e319acd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-10T12:08:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651691d0-e491-4bc5-a199-6caa9e319acd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0610 12:32:19.672258    8536 system_pods.go:86] 12 kube-system pods found
	I0610 12:32:19.672258    8536 system_pods.go:89] "coredns-7db6d8ff4d-kbhvv" [c9da505f-fd4e-4c29-ad69-3b5ac1e51e98] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "etcd-multinode-813300" [f9259e5e-61e9-4252-b7c6-de5d499eb9c1] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kindnet-29gbv" [aad8124e-6c05-4719-9adb-edc11b3cce42] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kindnet-2pc4j" [966ce4c1-e9ee-48d6-9e52-98143fa03e67] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kindnet-r4nfq" [dceb3d20-8d04-4408-927f-1c195558dd19] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-apiserver-multinode-813300" [2cf29b2c-a2a9-46ec-bbc8-fe884e97df06] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-controller-manager-multinode-813300" [879be9d7-8b2b-4f58-ba70-61d4e9f3441e] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-proxy-nrpvt" [40bf0aff-00b2-40c7-bed7-52b8cadbc3a1] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-proxy-rx2b2" [ce59a99b-a561-4598-9399-147f748433a2] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-proxy-vw56h" [f3f9e738-89d2-4776-a212-a1ca28952f7c] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "kube-scheduler-multinode-813300" [bd85735c-2f0d-48ab-bb0e-83f471c3af0a] Running
	I0610 12:32:19.672258    8536 system_pods.go:89] "storage-provisioner" [f6dfedc3-d6ff-412c-8a13-40a493c4199e] Running
	I0610 12:32:19.672874    8536 system_pods.go:126] duration metric: took 12.2449ms to wait for k8s-apps to be running ...
	I0610 12:32:19.672874    8536 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:32:19.683797    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:32:19.712741    8536 system_svc.go:56] duration metric: took 39.8671ms WaitForService to wait for kubelet
	I0610 12:32:19.712824    8536 kubeadm.go:576] duration metric: took 1m15.1914955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:32:19.712824    8536 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:32:19.713019    8536 round_trippers.go:463] GET https://172.17.150.144:8443/api/v1/nodes
	I0610 12:32:19.713087    8536 round_trippers.go:469] Request Headers:
	I0610 12:32:19.713087    8536 round_trippers.go:473]     Accept: application/json, */*
	I0610 12:32:19.713087    8536 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0610 12:32:19.717280    8536 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 12:32:19.717280    8536 round_trippers.go:577] Response Headers:
	I0610 12:32:19.717280    8536 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8bdafaf0-730f-4743-a8aa-9a4d235782cd
	I0610 12:32:19.718122    8536 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a83efe2e-8c2e-46ea-a143-72a1935ae63a
	I0610 12:32:19.718122    8536 round_trippers.go:580]     Date: Mon, 10 Jun 2024 12:32:19 GMT
	I0610 12:32:19.718122    8536 round_trippers.go:580]     Audit-Id: d3e3a9ae-bd4b-4b22-8e97-6e4006f75bc3
	I0610 12:32:19.718122    8536 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 12:32:19.718122    8536 round_trippers.go:580]     Content-Type: application/json
	I0610 12:32:19.718615    8536 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1841"},"items":[{"metadata":{"name":"multinode-813300","uid":"aab38eff-c0d7-48fa-9f38-bfa0011bf682","resourceVersion":"1803","creationTimestamp":"2024-06-10T12:07:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-813300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-813300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T12_08_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16273 chars]
	I0610 12:32:19.719336    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:32:19.719336    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:32:19.719336    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:32:19.719336    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:32:19.719336    8536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:32:19.719336    8536 node_conditions.go:123] node cpu capacity is 2
	I0610 12:32:19.719336    8536 node_conditions.go:105] duration metric: took 6.5117ms to run NodePressure ...
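The NodePressure check above reads each node's capacity out of the NodeList response: three nodes, each reporting 17734596Ki of ephemeral storage and 2 CPUs. A compact client-go sketch of the same read (kubeconfig path assumed, as in the previous sketch):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Mirrors the per-node pair of capacity lines above.
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
    }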
	I0610 12:32:19.719336    8536 start.go:240] waiting for startup goroutines ...
	I0610 12:32:19.719336    8536 start.go:245] waiting for cluster config update ...
	I0610 12:32:19.719336    8536 start.go:254] writing updated cluster config ...
	I0610 12:32:19.726596    8536 out.go:177] 
	I0610 12:32:19.729829    8536 config.go:182] Loaded profile config "ha-368100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:32:19.740405    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:32:19.740405    8536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:32:19.746414    8536 out.go:177] * Starting "multinode-813300-m02" worker node in "multinode-813300" cluster
	I0610 12:32:19.748409    8536 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 12:32:19.748409    8536 cache.go:56] Caching tarball of preloaded images
	I0610 12:32:19.748409    8536 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 12:32:19.749413    8536 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 12:32:19.749413    8536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:32:19.751404    8536 start.go:360] acquireMachinesLock for multinode-813300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:32:19.751404    8536 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-813300-m02"
	I0610 12:32:19.751404    8536 start.go:96] Skipping create...Using existing machine configuration
	I0610 12:32:19.751404    8536 fix.go:54] fixHost starting: m02
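acquireMachinesLock serializes machine operations per profile; here the lock is free, so acquisition takes 0s, with a 500ms retry delay and a 13m timeout configured for the contended case. An illustrative file-based version of such a timed, named lock (minikube's real lock implementation differs):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    // acquire takes a named lock by exclusively creating a lock file,
    // retrying every delay until timeout, like the Delay/Timeout pair above.
    func acquire(name string, delay, timeout time.Duration) (func(), error) {
        path := filepath.Join(os.TempDir(), name+".lock")
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for lock " + name)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("multinode-813300-m02", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("machines lock held")
    }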
	I0610 12:32:19.752518    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:22.145001    8536 main.go:141] libmachine: [stdout =====>] : Off
	
	I0610 12:32:22.145001    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:22.145001    8536 fix.go:112] recreateIfNeeded on multinode-813300-m02: state=Stopped err=<nil>
	W0610 12:32:22.145478    8536 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 12:32:22.150442    8536 out.go:177] * Restarting existing hyperv VM for "multinode-813300-m02" ...
	I0610 12:32:22.157329    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-813300-m02
	I0610 12:32:25.569690    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:32:25.570595    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:25.570595    8536 main.go:141] libmachine: Waiting for host to start...
	I0610 12:32:25.570666    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:28.042542    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:28.042542    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:28.042542    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:32:30.783114    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:32:30.783598    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:31.795751    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:34.187966    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:34.187966    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:34.188800    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:32:36.979297    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:32:36.979297    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:37.994728    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:40.374953    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:40.375046    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:40.375046    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:32:43.144200    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:32:43.144200    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:44.155496    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:46.557278    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:46.557660    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:46.557727    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:32:49.306332    8536 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:32:49.306332    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:50.318623    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:52.761527    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:52.761527    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:52.761527    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:32:55.546956    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:32:55.546956    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:55.550287    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:32:57.879214    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:32:57.879214    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:32:57.880290    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:00.654758    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:00.655199    8536 main.go:141] libmachine: [stderr =====>] : 
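The repeated PowerShell pairs above are the driver's start-up wait: query the VM state, then the first adapter's first IP address, and go around again after roughly a second while the address is still empty; the loop exits once 172.17.144.123 comes back. A hedged Go sketch of that loop (the VM name and 1s interval come from the log timing; the helper is illustrative, not minikube's driver code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell expression, as each "[executing ==>]" line above does.
    func ps(expr string) string {
        out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        const vm = "multinode-813300-m02"
        for {
            state := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            ip := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
            if state == "Running" && ip != "" {
                fmt.Println("host up at", ip)
                return
            }
            time.Sleep(time.Second) // matches the ~1s gap between empty polls above
        }
    }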
	I0610 12:33:00.655199    8536 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300\config.json ...
	I0610 12:33:00.658245    8536 machine.go:94] provisionDockerMachine start ...
	I0610 12:33:00.658321    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:02.977040    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:02.977040    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:02.977040    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:05.730578    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:05.730578    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:05.736885    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:05.737482    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:05.737482    8536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:33:05.878041    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:33:05.878097    8536 buildroot.go:166] provisioning hostname "multinode-813300-m02"
	I0610 12:33:05.878153    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:08.230250    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:08.230250    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:08.230250    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:11.059855    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:11.060105    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:11.065817    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:11.066491    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:11.066491    8536 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-813300-m02 && echo "multinode-813300-m02" | sudo tee /etc/hostname
	I0610 12:33:11.233601    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-813300-m02
	
	I0610 12:33:11.233601    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:13.602050    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:13.602050    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:13.602050    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:16.449433    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:16.450208    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:16.455912    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:16.456589    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:16.456589    8536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-813300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-813300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-813300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:33:16.608087    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
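The SSH command above is an idempotent /etc/hosts edit: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. The same logic as a self-contained Go sketch, pointed at a scratch file so it never touches a real /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostname(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        for _, l := range lines {
            if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
                return nil // hostname already mapped, nothing to do
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name // rewrite the existing alias line
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
            }
        }
        lines = append(lines, "127.0.1.1 "+name) // append when no alias exists
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }

    func main() {
        if err := ensureHostname("hosts.test", "multinode-813300-m02"); err != nil {
            fmt.Println("update failed:", err)
        }
    }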
	I0610 12:33:16.608087    8536 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:33:16.608625    8536 buildroot.go:174] setting up certificates
	I0610 12:33:16.608625    8536 provision.go:84] configureAuth start
	I0610 12:33:16.608711    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:18.978716    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:18.979390    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:18.979448    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:21.793174    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:21.793367    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:21.793456    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:24.178467    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:24.178467    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:24.178828    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:26.969751    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:26.969751    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:26.969751    8536 provision.go:143] copyHostCerts
	I0610 12:33:26.969905    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0610 12:33:26.969905    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:33:26.969905    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:33:26.970807    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:33:26.971863    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0610 12:33:26.972385    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:33:26.972385    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:33:26.972758    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:33:26.973731    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0610 12:33:26.973731    8536 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:33:26.973731    8536 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:33:26.974400    8536 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:33:26.975104    8536 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-813300-m02 san=[127.0.0.1 172.17.144.123 localhost minikube multinode-813300-m02]
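provision.go issues the worker's Docker server certificate with SANs covering loopback, the VM's current IP, and its hostnames, so TLS verification succeeds however the daemon is addressed. A hedged crypto/x509 sketch using the SAN set from the log line above (self-signed here for brevity; minikube signs with its CA key instead):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-813300-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN set taken from the "san=[...]" log line above.
            DNSNames:    []string{"localhost", "minikube", "multinode-813300-m02"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.144.123")},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }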
	I0610 12:33:27.303350    8536 provision.go:177] copyRemoteCerts
	I0610 12:33:27.315963    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:33:27.315963    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:29.645541    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:29.645614    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:29.645614    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:32.416141    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:32.416338    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:32.416338    8536 sshutil.go:53] new ssh client: &{IP:172.17.144.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:33:32.525224    8536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2092191s)
	I0610 12:33:32.525224    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0610 12:33:32.525224    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:33:32.575432    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0610 12:33:32.575996    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 12:33:32.631616    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0610 12:33:32.632313    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 12:33:32.686553    8536 provision.go:87] duration metric: took 16.0777996s to configureAuth
	I0610 12:33:32.686553    8536 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:33:32.687351    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:33:32.687483    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:34.999631    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:34.999631    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:34.999631    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:37.730266    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:37.730266    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:37.735498    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:37.735736    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:37.735736    8536 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:33:37.866123    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:33:37.866123    8536 buildroot.go:70] root file system type: tmpfs
	I0610 12:33:37.866123    8536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:33:37.866656    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:40.179815    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:40.179815    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:40.179815    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:42.997911    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:42.997911    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:43.003673    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:43.003673    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:43.004229    8536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.150.144"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:33:43.180023    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.150.144
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
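
The unit file echoed back above is rendered on the host and piped through `sudo tee` into docker.service.new. A hedged sketch of how such a unit can be produced with Go's text/template; the template text below is trimmed and the field names are illustrative, not minikube's actual template.

// sketch: render a docker.service unit from a template, as the step above suggests
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY={{.NoProxy}}"
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.ExtraFlags}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"NoProxy":    "172.17.150.144",
		"ExtraFlags": "--tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem",
	})
}
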
	I0610 12:33:43.180113    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:45.547268    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:45.547268    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:45.548022    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:48.274426    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:48.274426    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:48.280050    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:33:48.280110    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:33:48.280110    8536 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:33:50.776530    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
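
The `diff -u ... || { mv ...; systemctl ...; }` one-liner above is a compare-then-swap: Docker is re-enabled and restarted only when the freshly rendered unit differs from what is on disk (here the diff fails because the unit does not exist yet, so the new file is installed). The same idiom in Go, with illustrative local paths.

// sketch: replace the unit and restart only when the rendered file differs
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	oldB, err := os.ReadFile("/lib/systemd/system/docker.service")
	newB, _ := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err == nil && bytes.Equal(oldB, newB) {
		fmt.Println("unit unchanged; skipping restart")
		return
	}
	os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service")
	exec.Command("systemctl", "daemon-reload").Run()
	exec.Command("systemctl", "enable", "docker").Run()
	exec.Command("systemctl", "restart", "docker").Run()
}
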
	I0610 12:33:50.776530    8536 machine.go:97] duration metric: took 50.1178082s to provisionDockerMachine
	I0610 12:33:50.776530    8536 start.go:293] postStartSetup for "multinode-813300-m02" (driver="hyperv")
	I0610 12:33:50.776530    8536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:33:50.789531    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:33:50.789531    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:53.104831    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:53.105368    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:53.105368    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:33:55.919870    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:33:55.919870    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:55.921188    8536 sshutil.go:53] new ssh client: &{IP:172.17.144.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:33:56.041139    8536 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2515031s)
	I0610 12:33:56.053185    8536 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:33:56.061684    8536 command_runner.go:130] > NAME=Buildroot
	I0610 12:33:56.062022    8536 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 12:33:56.062022    8536 command_runner.go:130] > ID=buildroot
	I0610 12:33:56.062022    8536 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 12:33:56.062022    8536 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 12:33:56.062022    8536 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:33:56.062142    8536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:33:56.062410    8536 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:33:56.063328    8536 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:33:56.063422    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /etc/ssl/certs/75482.pem
	I0610 12:33:56.077388    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:33:56.100559    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:33:56.154264    8536 start.go:296] duration metric: took 5.3776908s for postStartSetup
	I0610 12:33:56.154361    8536 fix.go:56] duration metric: took 1m36.4021859s for fixHost
	I0610 12:33:56.154361    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:33:58.515535    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:33:58.515535    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:33:58.515535    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:01.349664    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:01.350032    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:01.356578    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:34:01.357362    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:34:01.357362    8536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 12:34:01.498897    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718022841.497997726
	
	I0610 12:34:01.498897    8536 fix.go:216] guest clock: 1718022841.497997726
	I0610 12:34:01.498897    8536 fix.go:229] Guest: 2024-06-10 12:34:01.497997726 +0000 UTC Remote: 2024-06-10 12:33:56.1543615 +0000 UTC m=+317.671170201 (delta=5.343636226s)
	I0610 12:34:01.498988    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:34:03.837377    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:03.837377    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:03.837941    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:06.688872    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:06.688872    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:06.695433    8536 main.go:141] libmachine: Using SSH client type: native
	I0610 12:34:06.695433    8536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.144.123 22 <nil> <nil>}
	I0610 12:34:06.695433    8536 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718022841
	I0610 12:34:06.846091    8536 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:34:01 UTC 2024
	
	I0610 12:34:06.846091    8536 fix.go:236] clock set: Mon Jun 10 12:34:01 UTC 2024
	 (err=<nil>)
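
The clock fix above is plain arithmetic: the guest reports `date +%s.%N`, the host timestamps the same moment, and the difference is the skew (5.343636226s here), resolved with `sudo date -s @<epoch>`. A sketch of the comparison using the values from the log; the one-second threshold and the choice of reference clock are assumptions, as the real decision lives in fix.go.

// sketch: compute guest/host clock skew from the logged values
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1718022841, 497997726)                       // guest `date +%s.%N`
	host := time.Date(2024, 6, 10, 12, 33, 56, 154361500, time.UTC) // host-side timestamp
	delta := guest.Sub(host)
	fmt.Printf("delta=%v\n", delta) // ~5.343636226s, matching the log

	if delta > time.Second || delta < -time.Second {
		// the log resolves the skew with: sudo date -s @<epoch>
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}
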
	I0610 12:34:06.846091    8536 start.go:83] releasing machines lock for "multinode-813300-m02", held for 1m47.0938302s
	I0610 12:34:06.847138    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:34:09.184866    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:09.184866    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:09.184992    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:12.023414    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:12.023414    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:12.026375    8536 out.go:177] * Found network options:
	I0610 12:34:12.029510    8536 out.go:177]   - NO_PROXY=172.17.150.144
	W0610 12:34:12.032010    8536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 12:34:12.038192    8536 out.go:177]   - NO_PROXY=172.17.150.144
	W0610 12:34:12.040219    8536 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 12:34:12.042464    8536 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 12:34:12.044408    8536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:34:12.044408    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:34:12.056541    8536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 12:34:12.056541    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:34:14.462480    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:14.462480    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:14.462480    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:14.468962    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:14.468962    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:14.468962    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:17.381494    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:17.381561    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:17.381901    8536 sshutil.go:53] new ssh client: &{IP:172.17.144.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:34:17.425372    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:17.425372    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:17.425788    8536 sshutil.go:53] new ssh client: &{IP:172.17.144.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:34:17.486772    8536 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0610 12:34:17.487116    8536 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.4305314s)
	W0610 12:34:17.487116    8536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:34:17.499691    8536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:34:17.567104    8536 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 12:34:17.567104    8536 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5226517s)
	I0610 12:34:17.567104    8536 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 12:34:17.567104    8536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
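
The `find ... -exec mv {} {}.mk_disabled` above sidelines any bridge or podman CNI configs so they cannot conflict with the cluster's own CNI; here it caught 87-podman-bridge.conflist. A Go sketch of the same rename pass.

// sketch: disable bridge/podman CNI configs by renaming them
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, _ := filepath.Glob("/etc/cni/net.d/*")
	for _, p := range matches {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			fmt.Printf("disabling %s\n", p)
			os.Rename(p, p+".mk_disabled")
		}
	}
}
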
	I0610 12:34:17.567104    8536 start.go:494] detecting cgroup driver to use...
	I0610 12:34:17.567104    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:34:17.608087    8536 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 12:34:17.621002    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 12:34:17.663612    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:34:17.691254    8536 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:34:17.702818    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:34:17.745521    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:34:17.778673    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:34:17.811125    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:34:17.847693    8536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:34:17.883755    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:34:17.919056    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:34:17.954882    8536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 12:34:17.988734    8536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:34:18.006989    8536 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 12:34:18.020120    8536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:34:18.052391    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:34:18.284080    8536 ssh_runner.go:195] Run: sudo systemctl restart containerd
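
The sed pipeline above rewrites /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false), pins the pause image, and normalizes the runc runtime before restarting containerd. A sketch of one of those in-place edits done with Go's regexp instead of sed; path and pattern mirror the log.

// sketch: the SystemdCgroup flip performed by the sed one-liner above
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(b, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}
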
	I0610 12:34:18.319139    8536 start.go:494] detecting cgroup driver to use...
	I0610 12:34:18.332130    8536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:34:18.364311    8536 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 12:34:18.364311    8536 command_runner.go:130] > [Unit]
	I0610 12:34:18.364311    8536 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 12:34:18.364311    8536 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 12:34:18.364311    8536 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 12:34:18.364311    8536 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 12:34:18.364311    8536 command_runner.go:130] > StartLimitBurst=3
	I0610 12:34:18.364311    8536 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 12:34:18.364311    8536 command_runner.go:130] > [Service]
	I0610 12:34:18.364311    8536 command_runner.go:130] > Type=notify
	I0610 12:34:18.364311    8536 command_runner.go:130] > Restart=on-failure
	I0610 12:34:18.364311    8536 command_runner.go:130] > Environment=NO_PROXY=172.17.150.144
	I0610 12:34:18.364311    8536 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 12:34:18.364311    8536 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 12:34:18.364311    8536 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 12:34:18.364311    8536 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 12:34:18.364311    8536 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 12:34:18.364311    8536 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 12:34:18.364311    8536 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 12:34:18.364311    8536 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 12:34:18.364311    8536 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 12:34:18.364311    8536 command_runner.go:130] > ExecStart=
	I0610 12:34:18.364311    8536 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0610 12:34:18.364311    8536 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 12:34:18.364311    8536 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 12:34:18.364311    8536 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 12:34:18.364311    8536 command_runner.go:130] > LimitNOFILE=infinity
	I0610 12:34:18.364311    8536 command_runner.go:130] > LimitNPROC=infinity
	I0610 12:34:18.364311    8536 command_runner.go:130] > LimitCORE=infinity
	I0610 12:34:18.364311    8536 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 12:34:18.364311    8536 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 12:34:18.364311    8536 command_runner.go:130] > TasksMax=infinity
	I0610 12:34:18.364311    8536 command_runner.go:130] > TimeoutStartSec=0
	I0610 12:34:18.364311    8536 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 12:34:18.364900    8536 command_runner.go:130] > Delegate=yes
	I0610 12:34:18.364931    8536 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 12:34:18.364931    8536 command_runner.go:130] > KillMode=process
	I0610 12:34:18.364931    8536 command_runner.go:130] > [Install]
	I0610 12:34:18.364931    8536 command_runner.go:130] > WantedBy=multi-user.target
	I0610 12:34:18.382511    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:34:18.417650    8536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:34:18.473526    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:34:18.515086    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:34:18.555200    8536 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:34:18.625294    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:34:18.655900    8536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:34:18.698130    8536 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 12:34:18.714342    8536 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:34:18.722125    8536 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 12:34:18.737268    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:34:18.758142    8536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:34:18.812211    8536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:34:19.023731    8536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:34:19.213536    8536 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:34:19.213536    8536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 12:34:19.261285    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:34:19.475193    8536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:34:22.127243    8536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.651951s)
	I0610 12:34:22.142001    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 12:34:22.181718    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:34:22.221910    8536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 12:34:22.451618    8536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 12:34:22.670927    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:34:22.914816    8536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 12:34:22.962787    8536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 12:34:23.005628    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:34:23.236422    8536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 12:34:23.373390    8536 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 12:34:23.389305    8536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 12:34:23.397858    8536 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 12:34:23.397996    8536 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 12:34:23.397996    8536 command_runner.go:130] > Device: 0,22	Inode: 853         Links: 1
	I0610 12:34:23.397996    8536 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 12:34:23.397996    8536 command_runner.go:130] > Access: 2024-06-10 12:34:23.267625237 +0000
	I0610 12:34:23.397996    8536 command_runner.go:130] > Modify: 2024-06-10 12:34:23.267625237 +0000
	I0610 12:34:23.397996    8536 command_runner.go:130] > Change: 2024-06-10 12:34:23.275625184 +0000
	I0610 12:34:23.397996    8536 command_runner.go:130] >  Birth: -
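
"Will wait 60s for socket path" above boils down to polling stat until /var/run/cri-dockerd.sock exists as a socket or the deadline passes. A sketch of that wait loop; the 500ms poll interval is an assumption.

// sketch: wait up to 60s for the cri-dockerd unix socket to appear
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for {
		if fi, err := os.Stat("/var/run/cri-dockerd.sock"); err == nil && fi.Mode()&os.ModeSocket != 0 {
			fmt.Println("socket ready")
			return
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for cri-dockerd.sock")
			os.Exit(1)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
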
	I0610 12:34:23.397996    8536 start.go:562] Will wait 60s for crictl version
	I0610 12:34:23.411009    8536 ssh_runner.go:195] Run: which crictl
	I0610 12:34:23.417900    8536 command_runner.go:130] > /usr/bin/crictl
	I0610 12:34:23.428590    8536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:34:23.500952    8536 command_runner.go:130] > Version:  0.1.0
	I0610 12:34:23.501055    8536 command_runner.go:130] > RuntimeName:  docker
	I0610 12:34:23.501055    8536 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 12:34:23.501055    8536 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 12:34:23.501135    8536 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 12:34:23.511418    8536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:34:23.555358    8536 command_runner.go:130] > 26.1.4
	I0610 12:34:23.567022    8536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 12:34:23.602503    8536 command_runner.go:130] > 26.1.4
	I0610 12:34:23.607275    8536 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 12:34:23.610213    8536 out.go:177]   - env NO_PROXY=172.17.150.144
	I0610 12:34:23.612242    8536 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0610 12:34:23.616240    8536 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0610 12:34:23.616240    8536 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0610 12:34:23.616240    8536 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0610 12:34:23.616240    8536 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:5c:49:25 Flags:up|broadcast|multicast|running}
	I0610 12:34:23.619242    8536 ip.go:210] interface addr: fe80::76a0:4644:5d9:ba33/64
	I0610 12:34:23.619242    8536 ip.go:210] interface addr: 172.17.144.1/20
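
ip.go above walks the host's adapters looking for the one whose name carries the Hyper-V "vEthernet (Default Switch)" prefix and takes its IPv4 address (172.17.144.1) as host.minikube.internal. Roughly the same lookup with net.Interfaces; the prefix match is inferred from the log lines, not copied from ip.go.

// sketch: find the Hyper-V default-switch adapter and its IPv4 address
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
			continue
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
				fmt.Println("host-side switch IP:", ipn.IP)
				return
			}
		}
	}
}
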
	I0610 12:34:23.630197    8536 ssh_runner.go:195] Run: grep 172.17.144.1	host.minikube.internal$ /etc/hosts
	I0610 12:34:23.639089    8536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
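
The shell pipeline above makes the /etc/hosts entry idempotent: strip any existing line for the name, then append the fresh mapping, so repeated starts never accumulate duplicates. The same filter-and-append in Go, with the literal values from the log.

// sketch: idempotent /etc/hosts entry, mirroring the grep -v / echo / cp pipeline
package main

import (
	"os"
	"strings"
)

func main() {
	const name, ip = "host.minikube.internal", "172.17.144.1"
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(b), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry for the name
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}
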
	I0610 12:34:23.663035    8536 mustload.go:65] Loading cluster: multinode-813300
	I0610 12:34:23.667572    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:34:23.668475    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:34:25.989998    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:25.990332    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:25.990332    8536 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:34:25.991117    8536 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-813300 for IP: 172.17.144.123
	I0610 12:34:25.991117    8536 certs.go:194] generating shared ca certs ...
	I0610 12:34:25.991249    8536 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:34:25.991946    8536 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0610 12:34:25.992378    8536 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0610 12:34:25.992568    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 12:34:25.992568    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0610 12:34:25.992568    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 12:34:25.993109    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 12:34:25.993659    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem (1338 bytes)
	W0610 12:34:25.993988    8536 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548_empty.pem, impossibly tiny 0 bytes
	I0610 12:34:25.994091    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0610 12:34:25.994434    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0610 12:34:25.994776    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0610 12:34:25.995087    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0610 12:34:25.995687    8536 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem (1708 bytes)
	I0610 12:34:25.995813    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> /usr/share/ca-certificates/75482.pem
	I0610 12:34:25.996099    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:34:25.996334    8536 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem -> /usr/share/ca-certificates/7548.pem
	I0610 12:34:25.996611    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 12:34:26.061140    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 12:34:26.114878    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 12:34:26.165135    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 12:34:26.221217    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /usr/share/ca-certificates/75482.pem (1708 bytes)
	I0610 12:34:26.272772    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 12:34:26.325499    8536 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\7548.pem --> /usr/share/ca-certificates/7548.pem (1338 bytes)
	I0610 12:34:26.389280    8536 ssh_runner.go:195] Run: openssl version
	I0610 12:34:26.399370    8536 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 12:34:26.411296    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75482.pem && ln -fs /usr/share/ca-certificates/75482.pem /etc/ssl/certs/75482.pem"
	I0610 12:34:26.446983    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75482.pem
	I0610 12:34:26.453896    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:34:26.454076    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:41 /usr/share/ca-certificates/75482.pem
	I0610 12:34:26.465350    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75482.pem
	I0610 12:34:26.473541    8536 command_runner.go:130] > 3ec20f2e
	I0610 12:34:26.484158    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75482.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 12:34:26.521022    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 12:34:26.556900    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:34:26.565030    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:34:26.565030    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:34:26.575611    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 12:34:26.585552    8536 command_runner.go:130] > b5213941
	I0610 12:34:26.598017    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 12:34:26.631815    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7548.pem && ln -fs /usr/share/ca-certificates/7548.pem /etc/ssl/certs/7548.pem"
	I0610 12:34:26.666237    8536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7548.pem
	I0610 12:34:26.673134    8536 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:34:26.673318    8536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:41 /usr/share/ca-certificates/7548.pem
	I0610 12:34:26.685683    8536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7548.pem
	I0610 12:34:26.693635    8536 command_runner.go:130] > 51391683
	I0610 12:34:26.705414    8536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7548.pem /etc/ssl/certs/51391683.0"
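
Each certificate above is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates trust anchors. A sketch that shells out to openssl for the hash and creates the link; the paths are illustrative.

// sketch: install a CA symlink by OpenSSL subject hash, like the
// `openssl x509 -hash` + `ln -fs` pair above
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // emulate ln -fs: replace any existing link
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
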
	I0610 12:34:26.742680    8536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 12:34:26.750860    8536 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:34:26.750860    8536 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 12:34:26.750860    8536 kubeadm.go:928] updating node {m02 172.17.144.123 8443 v1.30.1 docker false true} ...
	I0610 12:34:26.751533    8536 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-813300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.144.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 12:34:26.764383    8536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 12:34:26.792261    8536 command_runner.go:130] > kubeadm
	I0610 12:34:26.792909    8536 command_runner.go:130] > kubectl
	I0610 12:34:26.792909    8536 command_runner.go:130] > kubelet
	I0610 12:34:26.792909    8536 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 12:34:26.804796    8536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0610 12:34:26.829209    8536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0610 12:34:26.862757    8536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 12:34:26.911979    8536 ssh_runner.go:195] Run: grep 172.17.150.144	control-plane.minikube.internal$ /etc/hosts
	I0610 12:34:26.919006    8536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.150.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 12:34:26.956971    8536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:34:27.176894    8536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:34:27.209587    8536 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:34:27.210435    8536 start.go:316] joinCluster: &{Name:multinode-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.150.144 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.144.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.144.46 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:34:27.210597    8536 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.17.144.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 12:34:27.210663    8536 host.go:66] Checking if "multinode-813300-m02" exists ...
	I0610 12:34:27.211184    8536 mustload.go:65] Loading cluster: multinode-813300
	I0610 12:34:27.211850    8536 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:34:27.212425    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:34:29.561140    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:29.561203    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:29.561203    8536 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:34:29.561795    8536 api_server.go:166] Checking apiserver status ...
	I0610 12:34:29.572789    8536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:34:29.572789    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:34:31.904733    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:31.904733    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:31.904891    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:34.690377    8536 main.go:141] libmachine: [stdout =====>] : 172.17.150.144
	
	I0610 12:34:34.690377    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:34.691371    8536 sshutil.go:53] new ssh client: &{IP:172.17.150.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:34:34.816356    8536 command_runner.go:130] > 1892
	I0610 12:34:34.816356    8536 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.2435246s)
	I0610 12:34:34.829971    8536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1892/cgroup
	W0610 12:34:34.850979    8536 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1892/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 12:34:34.863836    8536 ssh_runner.go:195] Run: ls
	I0610 12:34:34.871466    8536 api_server.go:253] Checking apiserver healthz at https://172.17.150.144:8443/healthz ...
	I0610 12:34:34.878746    8536 api_server.go:279] https://172.17.150.144:8443/healthz returned 200:
	ok
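
Apiserver readiness above is a plain HTTPS GET against /healthz expecting a 200 and the body "ok". A sketch of that probe; skipping TLS verification is a shortcut for the sketch, whereas the real check trusts the cluster CA.

// sketch: the /healthz probe from the log
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get("https://172.17.150.144:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body)
}
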
	I0610 12:34:34.891725    8536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-813300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0610 12:34:35.074735    8536 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-r4nfq, kube-system/kube-proxy-rx2b2
	I0610 12:34:38.105435    8536 command_runner.go:130] > node/multinode-813300-m02 cordoned
	I0610 12:34:38.105435    8536 command_runner.go:130] > pod "busybox-fc5497c4f-czxmt" has DeletionTimestamp older than 1 seconds, skipping
	I0610 12:34:38.105435    8536 command_runner.go:130] > node/multinode-813300-m02 drained
	I0610 12:34:38.105626    8536 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-813300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.2136847s)
	I0610 12:34:38.105626    8536 node.go:128] successfully drained node "multinode-813300-m02"
	I0610 12:34:38.105626    8536 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0610 12:34:38.105744    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:34:40.442810    8536 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:34:40.443106    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:40.443106    8536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:34:43.281170    8536 main.go:141] libmachine: [stdout =====>] : 172.17.144.123
	
	I0610 12:34:43.281170    8536 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:34:43.282609    8536 sshutil.go:53] new ssh client: &{IP:172.17.144.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
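
Rejoining the worker follows the sequence logged above: drain the node (ignoring DaemonSet pods such as kindnet and kube-proxy), then run `kubeadm reset --force` against the cri-dockerd socket so the node can be re-added cleanly. A sketch of that sequence; minikube drives both commands over SSH with the in-VM binaries, while this sketch runs them locally for illustration.

// sketch: drain-then-reset before rejoining a worker node
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	run("kubectl", "drain", "multinode-813300-m02",
		"--force", "--grace-period=1", "--ignore-daemonsets",
		"--delete-emptydir-data", "--disable-eviction")
	run("kubeadm", "reset", "--force",
		"--cri-socket=unix:///var/run/cri-dockerd.sock")
}
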
	
	
	==> Docker <==
	Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981035932Z" level=warning msg="cleaning up after shim disconnected" id=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 namespace=moby
	Jun 10 12:31:31 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:31.981047633Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 12:31:31 multinode-813300 dockerd[1052]: time="2024-06-10T12:31:31.981399154Z" level=info msg="ignoring event" container=cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.486941957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487165464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.487187665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:31:47 multinode-813300 dockerd[1058]: time="2024-06-10T12:31:47.488142597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345354892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345592698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345620799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.345913706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.511059667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512286197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512437501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.512775109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/241c4811748facbb85003522d513039c3dfc5b38006b7f1cba90a5e411055e97/resolv.conf as [nameserver 172.17.144.1]"
	Jun 10 12:32:04 multinode-813300 cri-dockerd[1279]: time="2024-06-10T12:32:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4d124cebb3b3affe7ace090f1a152544207db26621b5b4098cad87e3db47a4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955148547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955266050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955283650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:32:04 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:04.955812861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444723816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444892597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.444914895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 12:32:05 multinode-813300 dockerd[1058]: time="2024-06-10T12:32:05.445846695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b9550940a81ca       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   c4d124cebb3b3       busybox-fc5497c4f-z28tq
	24f3f7e041f98       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   241c4811748fa       coredns-7db6d8ff4d-kbhvv
	e934ffe0f9032       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   2dd9b423841c9       storage-provisioner
	c3c4316beca64       ac1c61439df46                                                                                         4 minutes ago       Running             kindnet-cni               1                   0c19b39e15f6a       kindnet-29gbv
	cc9dbe4aa4005       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   2dd9b423841c9       storage-provisioner
	1de5fa0ef8384       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                1                   06d997d7c306c       kube-proxy-nrpvt
	d7941126134f2       91be940803172                                                                                         4 minutes ago       Running             kube-apiserver            0                   5c3da3b59b527       kube-apiserver-multinode-813300
	877ee07c14997       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   b13c0058ce265       etcd-multinode-813300
	d90e72ef46704       a52dc94f0a912                                                                                         4 minutes ago       Running             kube-scheduler            1                   8902dac03acbc       kube-scheduler-multinode-813300
	3bee53d5fef91       25a1387cdab82                                                                                         4 minutes ago       Running             kube-controller-manager   1                   f56cc8af37db0       kube-controller-manager-multinode-813300
	91782a06524c6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Exited              busybox                   0                   9ffef928b2474       busybox-fc5497c4f-z28tq
	f2e39052db195       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   a1ae7aed00678       coredns-7db6d8ff4d-kbhvv
	c39d54960e7d7       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              26 minutes ago      Exited              kindnet-cni               0                   689b8976cc029       kindnet-29gbv
	afad8b05897e5       747097150317f                                                                                         26 minutes ago      Exited              kube-proxy                0                   62db1c721951a       kube-proxy-nrpvt
	bd1a6cd987430       a52dc94f0a912                                                                                         27 minutes ago      Exited              kube-scheduler            0                   e3b6aa9a0e1d1       kube-scheduler-multinode-813300
	f1409bf44ff14       25a1387cdab82                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   f04d7b3d4fcc6       kube-controller-manager-multinode-813300
	
	
	==> coredns [24f3f7e041f9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e3d924d2f9cb2f2956dedff645c9495c17be3ab7b70eb5a0ffdd24a8395f229ab08124b0b1f9a4357cb25bb028b359a0bf9b68adb3049f617b44b0512a1bc852
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34387 - 41508 "HINFO IN 7171992165040069679.5605173313288368349. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051230172s
	
	
	==> coredns [f2e39052db19] <==
	[INFO] 10.244.0.3:44369 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095801s
	[INFO] 10.244.0.3:38578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001615s
	[INFO] 10.244.0.3:38593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002977s
	[INFO] 10.244.0.3:38526 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137201s
	[INFO] 10.244.0.3:48445 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001467s
	[INFO] 10.244.0.3:47462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000731s
	[INFO] 10.244.0.3:58225 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196101s
	[INFO] 10.244.1.2:35924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001833s
	[INFO] 10.244.1.2:51712 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001386s
	[INFO] 10.244.1.2:37161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007s
	[INFO] 10.244.1.2:37141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141s
	[INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001227s
	[INFO] 10.244.0.3:56133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000247001s
	[INFO] 10.244.0.3:48451 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000604s
	[INFO] 10.244.0.3:38368 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001264s
	[INFO] 10.244.1.2:44129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001056s
	[INFO] 10.244.1.2:34710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001955s
	[INFO] 10.244.1.2:59467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001589s
	[INFO] 10.244.1.2:53581 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002131s
	[INFO] 10.244.0.3:41745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	[INFO] 10.244.0.3:53512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001784s
	[INFO] 10.244.0.3:56441 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001208s
	[INFO] 10.244.0.3:55640 - 5 "PTR IN 1.144.17.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001199s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-813300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-813300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-813300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T12_08_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:07:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-813300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:35:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:31:40 +0000   Mon, 10 Jun 2024 12:31:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.150.144
	  Hostname:    multinode-813300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 8363a852b0fa420a8dccb009e6f4f9c7
	  System UUID:                5734c1ff-f59b-f647-9c36-fb6d9a8cd541
	  Boot ID:                    a60b688f-6b78-4fa5-b21e-96a64e5c1047
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-z28tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-kbhvv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m15s
	  kube-system                 kindnet-29gbv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-controller-manager-multinode-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-nrpvt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 4m12s                  kube-proxy       
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-813300 status is now: NodeReady
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m21s (x8 over 4m21s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x8 over 4m21s)  kubelet          Node multinode-813300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x7 over 4m21s)  kubelet          Node multinode-813300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                   node-controller  Node multinode-813300 event: Registered Node multinode-813300 in Controller
	
	
	Name:               multinode-813300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-813300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-813300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T12_11_29_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:11:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	                    node.kubernetes.io/unschedulable:NoSchedule
	Unschedulable:      true
	Lease:
	  HolderIdentity:  multinode-813300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:27:30 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 10 Jun 2024 12:22:42 +0000   Mon, 10 Jun 2024 12:28:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.151.128
	  Hostname:    multinode-813300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d46b791e8a04ff7a071c88405a5a4eb
	  System UUID:                e053fc34-e8e5-6649-afc7-f62c0d458753
	  Boot ID:                    a3528c50-da8b-4321-8198-65ea5eca732a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-czxmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kindnet-r4nfq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-rx2b2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node multinode-813300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node multinode-813300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-813300-m02 status is now: NodeReady
	  Normal  NodeNotReady             7m                 node-controller  Node multinode-813300-m02 status is now: NodeNotReady
	  Normal  RegisteredNode           4m3s               node-controller  Node multinode-813300-m02 event: Registered Node multinode-813300-m02 in Controller
	
	
	Name:               multinode-813300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-813300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-813300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T12_25_53_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:25:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-813300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:27:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 10 Jun 2024 12:26:23 +0000   Mon, 10 Jun 2024 12:27:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.144.46
	  Hostname:    multinode-813300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d60e1f6e3b2454db505a650eae61212
	  System UUID:                b38b4a9a-39f6-6f43-9e6d-19433dc62cd9
	  Boot ID:                    0a419483-5289-4d17-96c2-fd4487360412
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2pc4j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m23s
	  kube-system                 kube-proxy-vw56h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m23s (x2 over 9m23s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x2 over 9m23s)  kubelet          Node multinode-813300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x2 over 9m23s)  kubelet          Node multinode-813300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m21s                  node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
	  Normal  NodeReady                9m2s                   kubelet          Node multinode-813300-m03 status is now: NodeReady
	  Normal  NodeNotReady             7m31s                  node-controller  Node multinode-813300-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           4m3s                   node-controller  Node multinode-813300-m03 event: Registered Node multinode-813300-m03 in Controller
	
	
	==> dmesg <==
	[  +5.764981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.334692] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.227872] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.275008] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun10 12:30] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.213819] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[ +29.247267] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	[  +0.109477] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.638576] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.214581] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	[  +0.255487] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +3.027967] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.239865] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.216732] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	[  +0.314976] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	[  +0.112938] kauditd_printk_skb: 183 callbacks suppressed
	[  +0.871081] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	[  +5.053506] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +0.123809] kauditd_printk_skb: 34 callbacks suppressed
	[Jun10 12:31] kauditd_printk_skb: 62 callbacks suppressed
	[  +3.513215] hrtimer: interrupt took 368589 ns
	[  +0.107277] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	[  +7.541664] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [877ee07c1499] <==
	{"level":"info","ts":"2024-06-10T12:30:56.361057Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T12:30:56.361302Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T12:30:56.363117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d switched to configuration voters=(10323449867154160525)"}
	{"level":"info","ts":"2024-06-10T12:30:56.363612Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","added-peer-id":"8f4442f54c46fb8d","added-peer-peer-urls":["https://172.17.159.171:2380"]}
	{"level":"info","ts":"2024-06-10T12:30:56.364067Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ede117c4f607edf2","local-member-id":"8f4442f54c46fb8d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:30:56.364306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T12:30:56.367772Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-10T12:30:56.373962Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.17.150.144:2380"}
	{"level":"info","ts":"2024-06-10T12:30:56.374209Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.17.150.144:2380"}
	{"level":"info","ts":"2024-06-10T12:30:56.375497Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8f4442f54c46fb8d","initial-advertise-peer-urls":["https://172.17.150.144:2380"],"listen-peer-urls":["https://172.17.150.144:2380"],"advertise-client-urls":["https://172.17.150.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.150.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-10T12:30:56.375805Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-10T12:30:57.505031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-10T12:30:57.50539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-10T12:30:57.505605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgPreVoteResp from 8f4442f54c46fb8d at term 2"}
	{"level":"info","ts":"2024-06-10T12:30:57.505801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became candidate at term 3"}
	{"level":"info","ts":"2024-06-10T12:30:57.506022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d received MsgVoteResp from 8f4442f54c46fb8d at term 3"}
	{"level":"info","ts":"2024-06-10T12:30:57.506285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8f4442f54c46fb8d became leader at term 3"}
	{"level":"info","ts":"2024-06-10T12:30:57.506586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8f4442f54c46fb8d elected leader 8f4442f54c46fb8d at term 3"}
	{"level":"info","ts":"2024-06-10T12:30:57.511486Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8f4442f54c46fb8d","local-member-attributes":"{Name:multinode-813300 ClientURLs:[https://172.17.150.144:2379]}","request-path":"/0/members/8f4442f54c46fb8d/attributes","cluster-id":"ede117c4f607edf2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T12:30:57.512441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T12:30:57.512682Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T12:30:57.517481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T12:30:57.520873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T12:30:57.520973Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T12:30:57.543402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.150.144:2379"}
	
	
	==> kernel <==
	 12:35:15 up 6 min,  0 users,  load average: 0.83, 0.52, 0.24
	Linux multinode-813300 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c39d54960e7d] <==
	I0610 12:27:26.945625       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:27:36.955188       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:27:36.955329       1 main.go:227] handling current node
	I0610 12:27:36.955462       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:27:36.955581       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:27:36.955956       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:27:36.956158       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:27:46.965590       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:27:46.965717       1 main.go:227] handling current node
	I0610 12:27:46.965826       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:27:46.965836       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:27:46.966598       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:27:46.966708       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:27:56.999276       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:27:56.999553       1 main.go:227] handling current node
	I0610 12:27:56.999711       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:27:56.999728       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:27:57.000088       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:27:57.000177       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:28:07.015069       1 main.go:223] Handling node with IPs: map[172.17.159.171:{}]
	I0610 12:28:07.015281       1 main.go:227] handling current node
	I0610 12:28:07.015300       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:28:07.015308       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:28:07.015707       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:28:07.015928       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [c3c4316beca6] <==
	I0610 12:34:33.017910       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:34:43.032075       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:34:43.032123       1 main.go:227] handling current node
	I0610 12:34:43.032138       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:34:43.032145       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:34:43.032341       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:34:43.032692       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:34:53.046159       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:34:53.046231       1 main.go:227] handling current node
	I0610 12:34:53.046247       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:34:53.046254       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:34:53.046958       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:34:53.047078       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:35:03.054795       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:35:03.054973       1 main.go:227] handling current node
	I0610 12:35:03.055078       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:35:03.055186       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:35:03.055494       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:35:03.056267       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	I0610 12:35:13.068936       1 main.go:223] Handling node with IPs: map[172.17.150.144:{}]
	I0610 12:35:13.069044       1 main.go:227] handling current node
	I0610 12:35:13.069060       1 main.go:223] Handling node with IPs: map[172.17.151.128:{}]
	I0610 12:35:13.069068       1 main.go:250] Node multinode-813300-m02 has CIDR [10.244.1.0/24] 
	I0610 12:35:13.069661       1 main.go:223] Handling node with IPs: map[172.17.144.46:{}]
	I0610 12:35:13.069735       1 main.go:250] Node multinode-813300-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [d7941126134f] <==
	I0610 12:30:59.524664       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 12:30:59.525326       1 policy_source.go:224] refreshing policies
	I0610 12:30:59.543486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 12:30:59.547084       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 12:30:59.548579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 12:30:59.549972       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 12:30:59.550011       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 12:30:59.551151       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 12:30:59.554229       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 12:30:59.560228       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 12:30:59.578343       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 12:30:59.578414       1 aggregator.go:165] initial CRD sync complete...
	I0610 12:30:59.578429       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 12:30:59.578437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 12:30:59.578466       1 cache.go:39] Caches are synced for autoregister controller
	I0610 12:30:59.606740       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 12:31:00.360768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0610 12:31:00.893787       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.150.144]
	I0610 12:31:00.913283       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 12:31:00.933946       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 12:31:02.471259       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 12:31:02.690867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 12:31:02.714405       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 12:31:02.840117       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 12:31:02.856715       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [3bee53d5fef9] <==
	I0610 12:31:12.061957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.647762ms"
	I0610 12:31:12.062771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="326.05µs"
	I0610 12:31:12.074892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:31:12.074973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300"
	I0610 12:31:12.075004       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:31:12.075594       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0610 12:31:12.130853       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:31:12.140823       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0610 12:31:12.147492       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 12:31:12.174418       1 shared_informer.go:320] Caches are synced for disruption
	I0610 12:31:12.201305       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 12:31:12.218626       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0610 12:31:12.243193       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 12:31:12.658052       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:31:12.658432       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 12:31:12.674720       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 12:31:42.085794       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:32:06.626500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.481917ms"
	I0610 12:32:06.626834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.891µs"
	I0610 12:32:06.653330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="217.376µs"
	I0610 12:32:06.704393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.856077ms"
	I0610 12:32:06.705453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.995µs"
	I0610 12:34:35.125375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.155664ms"
	I0610 12:34:35.139621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.15179ms"
	I0610 12:34:35.140212       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="526.888µs"
	
	
	==> kube-controller-manager [f1409bf44ff1] <==
	I0610 12:08:32.538906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="180.301µs"
	I0610 12:08:32.610537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.137489ms"
	I0610 12:08:32.611020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5µs"
	I0610 12:08:34.635560       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0610 12:11:28.859639       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m02\" does not exist"
	I0610 12:11:28.879298       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m02" podCIDRs=["10.244.1.0/24"]
	I0610 12:11:29.670639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m02"
	I0610 12:11:51.574110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:12:19.785464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.490556ms"
	I0610 12:12:19.804051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.524284ms"
	I0610 12:12:19.806222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.9µs"
	I0610 12:12:19.813010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.401µs"
	I0610 12:12:19.818841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.9µs"
	I0610 12:12:22.803157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.023114ms"
	I0610 12:12:22.803959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.7µs"
	I0610 12:12:23.117968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.704624ms"
	I0610 12:12:23.118507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.5µs"
	I0610 12:25:52.678571       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-813300-m03\" does not exist"
	I0610 12:25:52.681612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:25:52.698797       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-813300-m03" podCIDRs=["10.244.2.0/24"]
	I0610 12:25:54.878967       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-813300-m03"
	I0610 12:26:13.380155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:27:44.944679       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-813300-m02"
	I0610 12:28:15.516170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.644756ms"
	I0610 12:28:15.516815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.1µs"
	
	
	==> kube-proxy [1de5fa0ef838] <==
	I0610 12:31:02.254962       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:31:02.294630       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.150.144"]
	I0610 12:31:02.403290       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:31:02.403338       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:31:02.403357       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:31:02.416009       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:31:02.416300       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:31:02.416345       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:31:02.424657       1 config.go:192] "Starting service config controller"
	I0610 12:31:02.425325       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:31:02.425369       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:31:02.425382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:31:02.432037       1 config.go:319] "Starting node config controller"
	I0610 12:31:02.432075       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:31:02.535663       1 shared_informer.go:320] Caches are synced for node config
	I0610 12:31:02.535744       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:31:02.535786       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [afad8b05897e] <==
	I0610 12:08:17.787330       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:08:17.815813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.17.159.171"]
	I0610 12:08:17.929231       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:08:17.929304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:08:17.929325       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:08:17.933115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:08:17.933534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:08:17.933681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:08:17.935227       1 config.go:192] "Starting service config controller"
	I0610 12:08:17.935260       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:08:17.935291       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:08:17.935297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:08:17.937731       1 config.go:319] "Starting node config controller"
	I0610 12:08:17.938095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:08:18.035433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:08:18.035502       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:08:18.038590       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bd1a6cd98743] <==
	E0610 12:07:58.427119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.503514       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 12:07:58.503568       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 12:07:58.610877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 12:07:58.611650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 12:07:58.611603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 12:07:58.612141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 12:07:58.614694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 12:07:58.614992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 12:07:58.752570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.752635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.810605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 12:07:58.810721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 12:07:58.815170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 12:07:58.815852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 12:07:58.816493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 12:07:58.816687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 12:07:58.834947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 12:07:58.836145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 12:07:58.838693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 12:07:58.838938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 12:07:58.897162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 12:07:58.897200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0610 12:08:01.565495       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0610 12:28:16.298586       1 run.go:74] "command failed" err="finished without leader elect"
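The burst of "forbidden" warnings above is the usual startup race: the scheduler's informers begin listing resources before the apiserver has finished establishing RBAC, and they simply retry; the cache-sync line at 12:08:01 shows recovery. The closing "finished without leader elect" is consistent with the scheduler being torn down for the restart this test performs, not with an RBAC failure. Had the warnings persisted, one way to check the scheduler's permissions directly is this sketch, assuming kubectl access to the cluster:

	kubectl auth can-i list nodes --as=system:kube-scheduler      # should print "yes" once RBAC is up
	kubectl get clusterrolebinding system:kube-scheduler -o wide  # the default binding the listers depend on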
	
	
	==> kube-scheduler [d90e72ef4670] <==
	I0610 12:30:56.811878       1 serving.go:380] Generated self-signed cert in-memory
	W0610 12:30:59.481898       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 12:30:59.482123       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 12:30:59.482217       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 12:30:59.482255       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 12:30:59.514164       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 12:30:59.514266       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:30:59.518405       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 12:30:59.518496       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 12:30:59.518958       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 12:30:59.519337       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 12:30:59.619122       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
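The requestheader_controller warning at 12:30:59 prints its own remediation template. Filled in, it would look like the sketch below; the rolebinding name and service account are illustrative placeholders, not values from this run, and here the scheduler continued without it, so no action was needed:

	kubectl create rolebinding -n kube-system extension-apiserver-authentication-reader \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=kube-system:kube-scheduler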
	
	
	==> kubelet <==
	Jun 10 12:31:32 multinode-813300 kubelet[1528]: E0610 12:31:32.477659    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f6dfedc3-d6ff-412c-8a13-40a493c4199e)\"" pod="kube-system/storage-provisioner" podUID="f6dfedc3-d6ff-412c-8a13-40a493c4199e"
	Jun 10 12:31:33 multinode-813300 kubelet[1528]: E0610 12:31:33.232631    1528 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-kbhvv" podUID="c9da505f-fd4e-4c29-ad69-3b5ac1e51e98"
	Jun 10 12:31:47 multinode-813300 kubelet[1528]: I0610 12:31:47.231895    1528 scope.go:117] "RemoveContainer" containerID="cc9dbe4aa4005155b3d320cbe8fe870629663d1df246c27fe5bf3467186eeae8"
	Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.214930    1528 scope.go:117] "RemoveContainer" containerID="34b9299d74e382eadb8e7df1029506efc87e283ac8b38024d9524b8bb815f705"
	Jun 10 12:31:54 multinode-813300 kubelet[1528]: E0610 12:31:54.266189    1528 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:31:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:31:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:31:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:31:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:31:54 multinode-813300 kubelet[1528]: I0610 12:31:54.275663    1528 scope.go:117] "RemoveContainer" containerID="ba52603f8387590319a4d5a9511265065e2f90bff6628bec2f622754e034c70a"
	Jun 10 12:32:54 multinode-813300 kubelet[1528]: E0610 12:32:54.266526    1528 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:32:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:32:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:32:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:32:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:33:54 multinode-813300 kubelet[1528]: E0610 12:33:54.267977    1528 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:33:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:33:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:33:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:33:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:34:54 multinode-813300 kubelet[1528]: E0610 12:34:54.266128    1528 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:34:54 multinode-813300 kubelet[1528]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:34:54 multinode-813300 kubelet[1528]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:34:54 multinode-813300 kubelet[1528]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:34:54 multinode-813300 kubelet[1528]:  > table="nat" chain="KUBE-KUBELET-CANARY"
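The recurring iptables-canary errors come from the guest kernel lacking the ip6tables nat table (the ip6table_nat module); the canary is a once-a-minute health probe, and its failure here does not block the IPv4 pod network the test exercises. A sketch for confirming from inside the VM, assuming shell access via minikube ssh:

	# inside the guest, e.g. 'minikube -p multinode-813300 ssh'
	lsmod | grep ip6table_nat        # absent -> the "Table does not exist" error above
	sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n   # loads the table if the kernel ships the module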
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:35:03.524804    2180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
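The same Docker CLI warning appears in every stderr capture in this report: the CLI's current context is "default", but its metadata file under .docker\contexts\meta is missing, so minikube logs the failure and carries on. A harmless way to inspect and reset the context state on the Jenkins host (sketch):

	docker context ls            # shows the current context and whether it resolves
	docker context use default   # rewrites the current-context marker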
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-813300 -n multinode-813300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-813300 -n multinode-813300: (13.1000755s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-813300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-wqqvm
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-813300 describe pod busybox-fc5497c4f-wqqvm
helpers_test.go:282: (dbg) kubectl --context multinode-813300 describe pod busybox-fc5497c4f-wqqvm:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-wqqvm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gzvx7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-gzvx7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  61s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
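The FailedScheduling event explains the Pending busybox pod: of the three nodes, one violates the deployment's pod anti-affinity, one still carries the node.kubernetes.io/unreachable taint from the restart, and one is marked unschedulable. A sketch for mapping nodes to those conditions:

	kubectl --context multinode-813300 get nodes \
	  -o custom-columns=NAME:.metadata.name,UNSCHEDULABLE:.spec.unschedulable,TAINTS:.spec.taints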
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (521.98s)

                                                
                                    
TestPreload (575.2s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-605700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0610 12:38:17.607942    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 12:39:41.883587    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-605700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m13.1943602s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-605700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-605700 image pull gcr.io/k8s-minikube/busybox: (9.6203677s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-605700
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-605700: (42.0195947s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-605700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0610 12:43:00.786886    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 12:43:17.613056    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 12:44:41.890140    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-605700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: exit status 90 (3m15.2786211s)

                                                
                                                
-- stdout --
	* [test-preload-605700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the hyperv driver based on existing profile
	* Starting "test-preload-605700" primary control-plane node in "test-preload-605700" cluster
	* Downloading Kubernetes v1.24.4 preload ...
	* Restarting existing hyperv VM for "test-preload-605700" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:42:56.680916    4240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0610 12:42:56.759976    4240 out.go:291] Setting OutFile to fd 612 ...
	I0610 12:42:56.761267    4240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:42:56.761267    4240 out.go:304] Setting ErrFile to fd 740...
	I0610 12:42:56.761267    4240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:42:56.788293    4240 out.go:298] Setting JSON to false
	I0610 12:42:56.791784    4240 start.go:129] hostinfo: {"hostname":"minikube6","uptime":23265,"bootTime":1718000111,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 12:42:56.791784    4240 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 12:42:56.980405    4240 out.go:177] * [test-preload-605700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 12:42:56.993784    4240 notify.go:220] Checking for updates...
	I0610 12:42:57.002047    4240 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 12:42:57.031519    4240 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 12:42:57.037789    4240 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 12:42:57.131879    4240 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 12:42:57.235636    4240 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 12:42:57.289374    4240 config.go:182] Loaded profile config "test-preload-605700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0610 12:42:57.328331    4240 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0610 12:42:57.377057    4240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 12:43:03.379480    4240 out.go:177] * Using the hyperv driver based on existing profile
	I0610 12:43:03.534159    4240 start.go:297] selected driver: hyperv
	I0610 12:43:03.534159    4240 start.go:901] validating driver "hyperv" against &{Name:test-preload-605700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-605700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.153.117 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:43:03.535310    4240 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 12:43:03.591544    4240 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:43:03.591710    4240 cni.go:84] Creating CNI manager for ""
	I0610 12:43:03.591710    4240 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 12:43:03.591819    4240 start.go:340] cluster config:
	{Name:test-preload-605700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-605700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.153.117 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:43:03.592230    4240 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:43:03.778785    4240 out.go:177] * Starting "test-preload-605700" primary control-plane node in "test-preload-605700" cluster
	I0610 12:43:03.785260    4240 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0610 12:43:03.833898    4240 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0610 12:43:03.834015    4240 cache.go:56] Caching tarball of preloaded images
	I0610 12:43:03.834558    4240 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0610 12:43:03.881188    4240 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0610 12:43:03.884409    4240 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0610 12:43:03.956631    4240 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4?checksum=md5:20cbd62a1b5d1968f21881a4a0f4f59e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0610 12:43:09.019769    4240 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0610 12:43:09.021066    4240 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0610 12:43:10.185906    4240 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on docker
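The preload URL above carries checksum=md5:20cbd62a1b5d1968f21881a4a0f4f59e, and preload.go verifies the cached tarball against it before trusting the cache. The check can be reproduced by hand against the file in the .minikube\cache\preloaded-tarball directory (sketch; md5sum assumes a Linux or Git-Bash shell on the host):

	md5sum preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	# expect: 20cbd62a1b5d1968f21881a4a0f4f59e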
	I0610 12:43:10.186690    4240 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\test-preload-605700\config.json ...
	I0610 12:43:10.188941    4240 start.go:360] acquireMachinesLock for test-preload-605700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:43:10.188941    4240 start.go:364] duration metric: took 0s to acquireMachinesLock for "test-preload-605700"
	I0610 12:43:10.188941    4240 start.go:96] Skipping create...Using existing machine configuration
	I0610 12:43:10.188941    4240 fix.go:54] fixHost starting: 
	I0610 12:43:10.190018    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:43:13.315005    4240 main.go:141] libmachine: [stdout =====>] : Off
	
	I0610 12:43:13.315145    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:13.315145    4240 fix.go:112] recreateIfNeeded on test-preload-605700: state=Stopped err=<nil>
	W0610 12:43:13.315206    4240 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 12:43:13.321212    4240 out.go:177] * Restarting existing hyperv VM for "test-preload-605700" ...
	I0610 12:43:13.323191    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM test-preload-605700
	I0610 12:43:16.732812    4240 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:43:16.733025    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:16.733025    4240 main.go:141] libmachine: Waiting for host to start...
	I0610 12:43:16.733053    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:43:19.185338    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:43:19.185644    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:19.185778    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:43:21.899158    4240 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:43:21.900209    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:22.912378    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:43:25.333973    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:43:25.333973    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:25.333973    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:43:28.121520    4240 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:43:28.121520    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:29.123371    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:43:31.514639    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:43:31.515308    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:31.515308    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:43:34.268623    4240 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:43:34.268623    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:35.275354    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:43:37.611425    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:43:37.611425    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:37.611994    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:43:40.370993    4240 main.go:141] libmachine: [stdout =====>] : 
	I0610 12:43:40.370993    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:41.383789    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:43:43.790055    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:43:43.790055    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:43.790511    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:43:46.774752    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:43:46.777852    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:46.781301    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:43:49.274320    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:43:49.274320    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:49.274320    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:43:52.212619    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:43:52.212619    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:52.213865    4240 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\test-preload-605700\config.json ...
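"Waiting for host to start..." is a poll loop: minikube re-runs the two PowerShell probes above (VM state, then the first adapter's first IP) until an address appears, with empty stdout from 12:43:21 through 12:43:40 and 172.17.148.206 finally at 12:43:46. Driven from a shell the way the log shows, the loop amounts to this sketch (VM name taken from this run):

	until ip=$(powershell.exe -NoProfile -NonInteractive \
	      '(( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]') && [ -n "$ip" ]; do
	  sleep 1   # the log shows roughly one probe pair every few seconds
	done
	echo "$ip"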
	I0610 12:43:52.215950    4240 machine.go:94] provisionDockerMachine start ...
	I0610 12:43:52.216481    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:43:54.662816    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:43:54.662816    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:54.663312    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:43:57.542616    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:43:57.542616    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:43:57.549133    4240 main.go:141] libmachine: Using SSH client type: native
	I0610 12:43:57.549265    4240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.148.206 22 <nil> <nil>}
	I0610 12:43:57.549265    4240 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 12:43:57.696019    4240 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 12:43:57.696019    4240 buildroot.go:166] provisioning hostname "test-preload-605700"
	I0610 12:43:57.696801    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:00.058465    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:00.058465    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:00.058465    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:02.805068    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:02.805131    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:02.810219    4240 main.go:141] libmachine: Using SSH client type: native
	I0610 12:44:02.810931    4240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.148.206 22 <nil> <nil>}
	I0610 12:44:02.810931    4240 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-605700 && echo "test-preload-605700" | sudo tee /etc/hostname
	I0610 12:44:02.982823    4240 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-605700
	
	I0610 12:44:02.982924    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:05.245113    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:05.245113    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:05.245113    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:08.078663    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:08.078663    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:08.084857    4240 main.go:141] libmachine: Using SSH client type: native
	I0610 12:44:08.085254    4240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.148.206 22 <nil> <nil>}
	I0610 12:44:08.085254    4240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-605700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-605700/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-605700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:44:08.255258    4240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:44:08.255322    4240 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0610 12:44:08.255463    4240 buildroot.go:174] setting up certificates
	I0610 12:44:08.255516    4240 provision.go:84] configureAuth start
	I0610 12:44:08.255637    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:10.557801    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:10.557981    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:10.558100    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:13.370645    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:13.371310    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:13.371310    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:15.746284    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:15.746284    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:15.746688    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:18.470429    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:18.470891    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:18.470891    4240 provision.go:143] copyHostCerts
	I0610 12:44:18.471886    4240 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0610 12:44:18.471886    4240 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0610 12:44:18.472699    4240 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0610 12:44:18.474719    4240 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0610 12:44:18.474822    4240 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0610 12:44:18.475385    4240 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0610 12:44:18.475780    4240 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0610 12:44:18.476824    4240 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0610 12:44:18.477359    4240 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0610 12:44:18.478715    4240 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.test-preload-605700 san=[127.0.0.1 172.17.148.206 localhost minikube test-preload-605700]
	I0610 12:44:18.681498    4240 provision.go:177] copyRemoteCerts
	I0610 12:44:18.693625    4240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:44:18.693727    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:20.961368    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:20.961467    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:20.961526    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:23.732964    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:23.732964    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:23.732964    4240 sshutil.go:53] new ssh client: &{IP:172.17.148.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-605700\id_rsa Username:docker}
	I0610 12:44:23.842595    4240 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1488264s)
	I0610 12:44:23.842595    4240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:44:23.893845    4240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0610 12:44:23.948917    4240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 12:44:23.995023    4240 provision.go:87] duration metric: took 15.7393195s to configureAuth
	I0610 12:44:23.995023    4240 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:44:23.995771    4240 config.go:182] Loaded profile config "test-preload-605700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0610 12:44:23.995882    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:26.405650    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:26.405650    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:26.405650    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:29.214544    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:29.214544    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:29.220229    4240 main.go:141] libmachine: Using SSH client type: native
	I0610 12:44:29.220635    4240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.148.206 22 <nil> <nil>}
	I0610 12:44:29.220635    4240 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 12:44:29.370603    4240 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 12:44:29.370603    4240 buildroot.go:70] root file system type: tmpfs
	I0610 12:44:29.370603    4240 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 12:44:29.370603    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:31.761596    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:31.761890    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:31.761890    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:34.584263    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:34.584263    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:34.590429    4240 main.go:141] libmachine: Using SSH client type: native
	I0610 12:44:34.591010    4240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.148.206 22 <nil> <nil>}
	I0610 12:44:34.591156    4240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 12:44:34.767200    4240 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 12:44:34.767200    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:37.082861    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:37.083108    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:37.083108    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:39.853836    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:39.854533    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:39.861203    4240 main.go:141] libmachine: Using SSH client type: native
	I0610 12:44:39.861203    4240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.148.206 22 <nil> <nil>}
	I0610 12:44:39.861203    4240 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 12:44:42.381853    4240 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 12:44:42.381908    4240 machine.go:97] duration metric: took 50.165556s to provisionDockerMachine
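The diff-or-install command at 12:44:39 is deliberately idempotent: the unit is rewritten and docker restarted only when the rendered file differs from what is installed. Here diff failed because no docker.service existed yet, so the new unit was moved into place, and the "Created symlink" line is systemctl enable taking effect. Condensed (a sketch of the command above, with the -f flags trimmed), the pattern is:

	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker; }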
	I0610 12:44:42.381942    4240 start.go:293] postStartSetup for "test-preload-605700" (driver="hyperv")
	I0610 12:44:42.381942    4240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:44:42.394481    4240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:44:42.395527    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:44.707567    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:44.707567    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:44.707567    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:47.454159    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:47.454580    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:47.454796    4240 sshutil.go:53] new ssh client: &{IP:172.17.148.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-605700\id_rsa Username:docker}
	I0610 12:44:47.571726    4240 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1761582s)
	I0610 12:44:47.584200    4240 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:44:47.592261    4240 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:44:47.592261    4240 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0610 12:44:47.592897    4240 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0610 12:44:47.594157    4240 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem -> 75482.pem in /etc/ssl/certs
	I0610 12:44:47.606783    4240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:44:47.629291    4240 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\75482.pem --> /etc/ssl/certs/75482.pem (1708 bytes)
	I0610 12:44:47.676689    4240 start.go:296] duration metric: took 5.2947043s for postStartSetup
	I0610 12:44:47.676689    4240 fix.go:56] duration metric: took 1m37.4869678s for fixHost
	I0610 12:44:47.676689    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:49.985677    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:49.985677    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:49.986502    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:52.758245    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:52.759305    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:52.764333    4240 main.go:141] libmachine: Using SSH client type: native
	I0610 12:44:52.765038    4240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.148.206 22 <nil> <nil>}
	I0610 12:44:52.765038    4240 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 12:44:52.918700    4240 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718023492.912091560
	
	I0610 12:44:52.918811    4240 fix.go:216] guest clock: 1718023492.912091560
	I0610 12:44:52.918811    4240 fix.go:229] Guest: 2024-06-10 12:44:52.91209156 +0000 UTC Remote: 2024-06-10 12:44:47.6766896 +0000 UTC m=+111.087510801 (delta=5.23540196s)
	I0610 12:44:52.918811    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:44:55.235581    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:44:55.236341    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:55.236419    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:44:57.949029    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:44:57.949029    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:44:57.954919    4240 main.go:141] libmachine: Using SSH client type: native
	I0610 12:44:57.955501    4240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x85a540] 0x85d120 <nil>  [] 0s} 172.17.148.206 22 <nil> <nil>}
	I0610 12:44:57.955501    4240 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718023492
	I0610 12:44:58.110902    4240 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 10 12:44:52 UTC 2024
	
	I0610 12:44:58.110902    4240 fix.go:236] clock set: Mon Jun 10 12:44:52 UTC 2024
	 (err=<nil>)
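The delta logged by fix.go:229 above is plain wall-clock arithmetic: the guest read 12:44:52.912091560 UTC while the host-side reference stamp was 12:44:47.6766896 UTC, a difference of 5.23540196s, after which the guest clock is reset with the `sudo date -s @1718023492` command shown (the epoch is the guest reading truncated to whole seconds). A minimal sketch of that computation, with both timestamps copied from the log and the correction-threshold logic omitted:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest and host-side timestamps exactly as logged by fix.go:229.
	guest := time.Date(2024, 6, 10, 12, 44, 52, 912091560, time.UTC)
	remote := time.Date(2024, 6, 10, 12, 44, 47, 676689600, time.UTC)
	fmt.Println(guest.Sub(remote)) // 5.23540196s, matching the logged delta
	// The epoch passed to `sudo date -s` above is the guest reading
	// truncated to whole seconds:
	fmt.Printf("sudo date -s @%d\n", guest.Unix()) // @1718023492
}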
	I0610 12:44:58.110902    4240 start.go:83] releasing machines lock for "test-preload-605700", held for 1m47.9210975s
	I0610 12:44:58.110902    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:45:00.416082    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:45:00.416729    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:45:00.416906    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:45:03.190575    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:45:03.190575    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:45:03.194879    4240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:45:03.195060    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:45:03.204939    4240 ssh_runner.go:195] Run: cat /version.json
	I0610 12:45:03.204939    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-605700 ).state
	I0610 12:45:05.585001    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:45:05.585001    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:45:05.585001    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:45:05.602304    4240 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:45:05.602304    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:45:05.602304    4240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-605700 ).networkadapters[0]).ipaddresses[0]
	I0610 12:45:08.434834    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:45:08.435727    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:45:08.436156    4240 sshutil.go:53] new ssh client: &{IP:172.17.148.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-605700\id_rsa Username:docker}
	I0610 12:45:08.466528    4240 main.go:141] libmachine: [stdout =====>] : 172.17.148.206
	
	I0610 12:45:08.466646    4240 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:45:08.467102    4240 sshutil.go:53] new ssh client: &{IP:172.17.148.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-605700\id_rsa Username:docker}
	I0610 12:45:08.607986    4240 ssh_runner.go:235] Completed: cat /version.json: (5.4029425s)
	I0610 12:45:08.607986    4240 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4130024s)
	I0610 12:45:08.621007    4240 ssh_runner.go:195] Run: systemctl --version
	I0610 12:45:08.642630    4240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 12:45:08.650963    4240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:45:08.663344    4240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:45:08.695899    4240 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 12:45:08.695899    4240 start.go:494] detecting cgroup driver to use...
	I0610 12:45:08.695899    4240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:45:08.746632    4240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0610 12:45:08.783691    4240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 12:45:08.804716    4240 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 12:45:08.816051    4240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 12:45:08.851187    4240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:45:08.889603    4240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 12:45:08.923580    4240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 12:45:08.957263    4240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:45:08.992769    4240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 12:45:09.025513    4240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 12:45:09.060274    4240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 12:45:09.097481    4240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:45:09.128192    4240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 12:45:09.161845    4240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:45:09.367890    4240 ssh_runner.go:195] Run: sudo systemctl restart containerd
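Each sed invocation above is an in-place regex rewrite of /etc/containerd/config.toml; the SystemdCgroup one, for instance, forces the runc shim off the systemd cgroup driver to match the "configuring containerd to use \"cgroupfs\"" line. A minimal Go equivalent of that single rewrite, applied to an illustrative config fragment (the section name is the one containerd's CRI plugin uses; the fragment itself is an assumption, not copied from this run):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Preserve the captured indentation, flip the value to false.
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}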
	I0610 12:45:09.404090    4240 start.go:494] detecting cgroup driver to use...
	I0610 12:45:09.415793    4240 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 12:45:09.453971    4240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:45:09.492969    4240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:45:09.554494    4240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:45:09.593853    4240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:45:09.636891    4240 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 12:45:09.705782    4240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 12:45:09.734333    4240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:45:09.792098    4240 ssh_runner.go:195] Run: which cri-dockerd
	I0610 12:45:09.816759    4240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 12:45:09.840876    4240 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 12:45:09.894376    4240 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 12:45:10.118971    4240 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 12:45:10.315426    4240 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 12:45:10.315677    4240 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 12:45:10.364225    4240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:45:10.589202    4240 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 12:46:11.734157    4240 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1435236s)
	I0610 12:46:11.748629    4240 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0610 12:46:11.784328    4240 out.go:177] 
	W0610 12:46:11.787672    4240 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 10 12:44:40 test-preload-605700 systemd[1]: Starting Docker Application Container Engine...
	Jun 10 12:44:40 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:40.514648756Z" level=info msg="Starting up"
	Jun 10 12:44:40 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:40.515887141Z" level=info msg="containerd not running, starting managed containerd"
	Jun 10 12:44:40 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:40.519874894Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=665
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.563403772Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.591676933Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.591813231Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.592003529Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.592128028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.593121116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.593268414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.594202803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.594314101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.594401200Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.594506399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.595018193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.596577674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.599808136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.599861735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.600094532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.600242530Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.600743724Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.600858223Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.600877523Z" level=info msg="metadata content store policy set" policy=shared
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.613807968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.613886667Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.613955566Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.613975666Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.613994665Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614097564Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614440360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614616058Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614641058Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614728357Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614827755Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614851355Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614865955Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614880955Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614898255Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614913254Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614938554Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614957254Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.614991754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615023153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615041253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615056053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615069353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615091152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615104352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615118352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615172451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615198051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615211451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615242551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615255450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615274350Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615296850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615315550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615332349Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615384349Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615488048Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615560247Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615585346Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615604246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615619646Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.615631246Z" level=info msg="NRI interface is disabled by configuration."
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.616276138Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.616823332Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.617035129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 10 12:44:40 test-preload-605700 dockerd[665]: time="2024-06-10T12:44:40.617523323Z" level=info msg="containerd successfully booted in 0.057963s"
	Jun 10 12:44:41 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:41.581032453Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 10 12:44:41 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:41.722762560Z" level=info msg="Loading containers: start."
	Jun 10 12:44:42 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:42.184303223Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 10 12:44:42 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:42.285445298Z" level=info msg="Loading containers: done."
	Jun 10 12:44:42 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:42.310737967Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 10 12:44:42 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:42.311522794Z" level=info msg="Daemon has completed initialization"
	Jun 10 12:44:42 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:42.371050840Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 10 12:44:42 test-preload-605700 dockerd[659]: time="2024-06-10T12:44:42.371417552Z" level=info msg="API listen on [::]:2376"
	Jun 10 12:44:42 test-preload-605700 systemd[1]: Started Docker Application Container Engine.
	Jun 10 12:45:10 test-preload-605700 systemd[1]: Stopping Docker Application Container Engine...
	Jun 10 12:45:10 test-preload-605700 dockerd[659]: time="2024-06-10T12:45:10.618255491Z" level=info msg="Processing signal 'terminated'"
	Jun 10 12:45:10 test-preload-605700 dockerd[659]: time="2024-06-10T12:45:10.620751103Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 10 12:45:10 test-preload-605700 dockerd[659]: time="2024-06-10T12:45:10.622144165Z" level=info msg="Daemon shutdown complete"
	Jun 10 12:45:10 test-preload-605700 dockerd[659]: time="2024-06-10T12:45:10.622273671Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 10 12:45:10 test-preload-605700 dockerd[659]: time="2024-06-10T12:45:10.622309773Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 10 12:45:11 test-preload-605700 systemd[1]: docker.service: Deactivated successfully.
	Jun 10 12:45:11 test-preload-605700 systemd[1]: Stopped Docker Application Container Engine.
	Jun 10 12:45:11 test-preload-605700 systemd[1]: Starting Docker Application Container Engine...
	Jun 10 12:45:11 test-preload-605700 dockerd[1049]: time="2024-06-10T12:45:11.703040088Z" level=info msg="Starting up"
	Jun 10 12:46:11 test-preload-605700 dockerd[1049]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 10 12:46:11 test-preload-605700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 10 12:46:11 test-preload-605700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 10 12:46:11 test-preload-605700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0610 12:46:11.788310    4240 out.go:239] * 
	W0610 12:46:11.789888    4240 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 12:46:11.793339    4240 out.go:177] 

** /stderr **
preload_test.go:68: out/minikube-windows-amd64.exe start -p test-preload-605700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv failed: exit status 90
panic.go:626: *** TestPreload FAILED at 2024-06-10 12:46:12.0043444 +0000 UTC m=+8694.964093201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-605700 -n test-preload-605700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-605700 -n test-preload-605700: exit status 6 (12.9359569s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0610 12:46:12.139436    9116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0610 12:46:24.883866    9116 status.go:417] kubeconfig endpoint: get endpoint: "test-preload-605700" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-605700" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-605700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-605700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-605700: (1m1.9931857s)
--- FAIL: TestPreload (575.20s)
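The terminal failure is in the journalctl excerpt above: the first dockerd (pid 659) came up cleanly with its managed containerd, but after minikube rewrote the daemon configuration and restarted the service, the second dockerd (pid 1049) spent a full minute failing to dial /run/containerd/containerd.sock and exited with "context deadline exceeded", plausibly because the standalone containerd had been stopped by the `systemctl stop -f containerd` step moments earlier and never came back before docker. When triaging by hand, probing that socket from inside the guest separates "containerd down" from "docker misconfigured"; a minimal sketch, assuming the socket path from the log:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// dockerd's startup dials this socket and gives up after ~60s;
	// probe it with a short timeout instead.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	var d net.Dialer
	conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
	if err != nil {
		fmt.Println("containerd socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket accepted a connection")
}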

TestNoKubernetes/serial/StartWithK8s (299.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-157300 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-157300 --driver=hyperv: exit status 1 (4m59.5894924s)

-- stdout --
	* [NoKubernetes-157300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-157300" primary control-plane node in "NoKubernetes-157300" cluster

-- /stdout --
** stderr ** 
	W0610 12:53:15.858866    8920 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-157300 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-157300 -n NoKubernetes-157300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-157300 -n NoKubernetes-157300: exit status 7 (243.5234ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W0610 12:58:15.426928    9860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-157300" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.83s)
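This start never got past VM provisioning: the run exited with status 1 while still at 'Starting "NoKubernetes-157300" primary control-plane node', and the post-mortem status reports the host as Nonexistent. For hand triage of these Hyper-V runs, the same Get-VM query the driver issues throughout the logs answers whether the VM exists at all; a minimal sketch mirroring that pattern (profile name taken from this test):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmState shells out to PowerShell the same way libmachine does in the
// log lines above and returns the VM's state string.
func vmState(name string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name),
	).Output()
	if err != nil {
		return "", err // Get-VM fails if the VM was never created
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := vmState("NoKubernetes-157300")
	if err != nil {
		fmt.Println("Get-VM failed:", err)
		return
	}
	fmt.Println("VM state:", state)
}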

TestStartStop/group/old-k8s-version/serial/FirstStart (10800.509s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-764400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.20.0
panic: test timed out after 3h0m0s
running tests:
	TestCertOptions (3m32s)
	TestForceSystemdEnv (5m45s)
	TestNetworkPlugins (6m0s)
	TestPause (2m29s)
	TestPause/serial (2m29s)
	TestPause/serial/Start (2m29s)
	TestStartStop (20m53s)
	TestStartStop/group/old-k8s-version (1m28s)
	TestStartStop/group/old-k8s-version/serial (1m28s)
	TestStartStop/group/old-k8s-version/serial/FirstStart (1m28s)
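The 3h0m0s figure is the suite-level go test -timeout deadline: when it expires, the test binary panics and dumps every live goroutine, which is what the rest of this failure consists of. The same shape reproduces in isolation; a minimal sketch (hypothetical file, run with `go test -timeout 2s`):

// timeout_demo_test.go (hypothetical)
package demo

import (
	"testing"
	"time"
)

// TestHang sleeps past the -timeout deadline; the testing runtime then
// panics with "test timed out after 2s" and prints all goroutine stacks.
func TestHang(t *testing.T) {
	time.Sleep(10 * time.Second)
}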

goroutine 2317 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0008dd380, 0xc000779bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0006783c0, {0x4c62020, 0x2a, 0x2a}, {0x2896712?, 0x6d806f?, 0x4c852a0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000813e00)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000813e00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000071080)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2296 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc0013bfb20?, 0x637ea5?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x622cf9?, 0xc0013bfb80?, 0x62fdd6?, 0x4d12700?, 0xc0013bfc08?, 0x622985?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7dc, {0xc001628dc7?, 0x9239, 0x6d417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0019cac88?, {0xc001628dc7?, 0x65c1be?, 0x20000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0019cac88, {0xc001628dc7, 0x9239, 0x9239})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000798da0, {0xc001628dc7?, 0xc0006056c0?, 0xfe36?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014ed770, {0x389be00, 0xc00011cc78})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x389bf40, 0xc0014ed770}, {0x389be00, 0xc00011cc78}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0013bfe78?, {0x389bf40, 0xc0014ed770})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0013bff38?, {0x389bf40?, 0xc0014ed770?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x389bf40, 0xc0014ed770}, {0x389bec0, 0xc000798da0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0014d5740?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 726
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2159 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015d1860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015d1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015d1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015d1860, 0xc00081e540)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2154
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 726 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0x7ffb24764de0?, {0xc001483a80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x25c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001963950)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00193e840)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00193e840)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0015d0d00, 0xc00193e840)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0015d0d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:155 +0x3ba
testing.tRunner(0xc0015d0d00, 0x33467f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 69 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 68
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 127 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 126
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2010 [chan receive, 6 minutes]:
testing.(*T).Run(0xc000538820, {0x283a819?, 0x68f48d?}, 0xc000126870)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000538820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc000538820, 0x33468a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 722 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ffb24764de0?, {0xc001831808?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x788, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001a2cba0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000cca420)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000cca420)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0015d0680, 0xc000cca420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0015d0680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0015d0680, 0x33467c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 928 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001664c10, 0x35)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x232f780?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001ba4cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001664c40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001cf7850, {0x389d240, 0xc000ccdc20}, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001cf7850, 0x3b9aca00, 0x0, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 969
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 1085 [chan send, 147 minutes]:
os/exec.(*Cmd).watchCtx(0xc0004238c0, 0xc001a28420)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1084
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 147 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000c6e960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 126 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x38c0d20, 0xc0000542a0}, 0xc000911f50, 0xc000911f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x38c0d20, 0xc0000542a0}, 0x90?, 0xc000911f50, 0xc000911f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x38c0d20?, 0xc0000542a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000911fd0?, 0x7ae404?, 0xc00074cd80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 148
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 125 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000922490, 0x3c)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x232f780?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000c6e840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009224c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005040f0, {0x389d240, 0xc000674e40}, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005040f0, 0x3b9aca00, 0x0, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 148
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

goroutine 2297 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc00193e840, 0xc0014d5860)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 726
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2310 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc000917b20?, 0xc0001fe738?, 0xc000917b60?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000917ba8?, 0x7708f5?, 0x12?, 0xc00001e000?, 0xc000917c08?, 0x62281b?, 0x618ba6?, 0xc000cabf80?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x764, {0xc00085153a?, 0x2c6, 0xc000851400?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0007f1b88?, {0xc00085153a?, 0x655170?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0007f1b88, {0xc00085153a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007984f0, {0xc00085153a?, 0xc000685a40?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015b4540, {0x389be00, 0xc0007a96a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x389bf40, 0xc0015b4540}, {0x389be00, 0xc0007a96a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000917e78?, {0x389bf40, 0xc0015b4540})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000917f38?, {0x389bf40?, 0xc0015b4540?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x389bf40, 0xc0015b4540}, {0x389bec0, 0xc0007984f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0017d7860?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 722
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 148 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009224c0, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 2316 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000cca160, 0xc0014d4180)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2313
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2197 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008dda00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008dda00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008dda00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008dda00, 0xc000814200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2196
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2104 [chan receive, 20 minutes]:
testing.(*T).Run(0xc000539860, {0x283a819?, 0x767333?}, 0x3346ac0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc000539860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000539860, 0x33468e8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2157 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015d1520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015d1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015d1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015d1520, 0xc00081e380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2154
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2312 [chan receive, 2 minutes]:
testing.(*T).Run(0xc000c65520, {0x283a81e?, 0x24?}, 0xc00023c840)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc000c65520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc000c65520, 0xc0017fa000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2012
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 968 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001ba4de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 930
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2198 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008ddd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008ddd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008ddd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008ddd40, 0xc000814380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2196
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 768 [IO wait, 160 minutes]:
internal/poll.runtime_pollWait(0x2206e399c08, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x62fdd6?, 0x4d12700?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0007f0020, 0xc001c5dbb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0007f0008, 0x38c, {0xc0007c4000?, 0x0?, 0x2600000000?}, 0xc0000a9008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0007f0008, 0xc001c5dd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0007f0008)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc00167c2a0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00167c2a0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0008a80f0, {0x38b3dc0, 0xc00167c2a0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0008a80f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0015d0ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 765
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2154 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0015d1040, 0x3346ac0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2104
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2155 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0015d11e0, {0x283bd2c?, 0x0?}, 0xc0007a4400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015d11e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0015d11e0, 0xc00081e280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2154
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2311 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc000cca420, 0xc000c7c2a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 722
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1306 [chan send, 142 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015fb760, 0xc001fafbc0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 894
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 969 [chan receive, 149 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001664c40, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 930
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

goroutine 2200 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000160680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000160680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000160680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000160680, 0xc000814480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2196
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2205 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c651e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c651e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c651e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c651e0, 0xc000814780)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2196
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2314 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc001765b20?, 0x637ea5?, 0x4d12700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0xc001765b80?, 0x62fdd6?, 0x4d12700?, 0xc001765c08?, 0x62281b?, 0x22048a50108?, 0x35?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x630, {0xc000c98de9?, 0x217, 0x6d417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001570288?, {0xc000c98de9?, 0x65c1be?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001570288, {0xc000c98de9, 0x217, 0x217})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00011c7e0, {0xc000c98de9?, 0xc001765d98?, 0x68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0017fa0f0, {0x389be00, 0xc00011c830})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x389bf40, 0xc0017fa0f0}, {0x389be00, 0xc00011c830}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x389bf40, 0xc0017fa0f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x620c36?, {0x389bf40?, 0xc0017fa0f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x389bf40, 0xc0017fa0f0}, {0x389bec0, 0xc00011c7e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00145b4a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2313
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 929 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x38c0d20, 0xc0000542a0}, 0xc001c5ff50, 0xc001c5ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x38c0d20, 0xc0000542a0}, 0xa0?, 0xc001c5ff50, 0xc001c5ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x38c0d20?, 0xc0000542a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001c5ffd0?, 0x7ae404?, 0xc0020842a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 969
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

goroutine 2309 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc00176bb20?, 0x637ea5?, 0x4d12700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00176bba8?, 0xc00176bb80?, 0x62fdd6?, 0x4d12700?, 0xc00176bc08?, 0x622985?, 0x22048a50108?, 0x4d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5dc, {0xc000ca823d?, 0x5c3, 0x6d417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0007f1688?, {0xc000ca823d?, 0x62aedd?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0007f1688, {0xc000ca823d, 0x5c3, 0x5c3})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007984d8, {0xc000ca823d?, 0xc00176bd38?, 0x23c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015b4510, {0x389be00, 0xc000718190})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x389bf40, 0xc0015b4510}, {0x389be00, 0xc000718190}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00176be70?, {0x389bf40, 0xc0015b4510})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00176beb8?, {0x389bf40?, 0xc0015b4510?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x389bf40, 0xc0015b4510}, {0x389bec0, 0xc0007984d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0016e6d20?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 722
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2315 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x72676f72505c3a43?, {0xc001937b20?, 0x637ea5?, 0x4d12700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x72676f72505c3a43?, 0xc001937b80?, 0x62fdd6?, 0x4d12700?, 0xc001937c08?, 0x622985?, 0x22048a50598?, 0x65747379735c5341?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7a0, {0xc000c9893a?, 0x2c6, 0xc000c98800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001570788?, {0xc000c9893a?, 0x65c1be?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001570788, {0xc000c9893a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00011c810, {0xc000c9893a?, 0xc001473340?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0017fa120, {0x389be00, 0xc000798180})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x389bf40, 0xc0017fa120}, {0x389be00, 0xc000798180}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001937e78?, {0x389bf40, 0xc0017fa120})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001937f38?, {0x389bf40?, 0xc0017fa120?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x389bf40, 0xc0017fa120}, {0x389bec0, 0xc00011c810}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00199e0c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2313
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2325 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x634445?, {0xc001423b20?, 0x637ea5?, 0x4d12700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x622ad4?, 0xc001423b80?, 0x62fdd6?, 0x4d12700?, 0xc001423c08?, 0x622985?, 0x22048a50598?, 0x67?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7c0, {0xc00076dce2?, 0x31e, 0x6d417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0007f0f08?, {0xc00076dce2?, 0x655170?, 0x2000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0007f0f08, {0xc00076dce2, 0x31e, 0x31e})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000818050, {0xc00076dce2?, 0xc001423d98?, 0x1000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000cb4930, {0x389be00, 0xc00011c888})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x389bf40, 0xc000cb4930}, {0x389be00, 0xc00011c888}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x389bf40, 0xc000cb4930})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x620c36?, {0x389bf40?, 0xc000cb4930?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x389bf40, 0xc000cb4930}, {0x389bec0, 0xc000818050}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0008b4600?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2323
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2201 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000161860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000161860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000161860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000161860, 0xc000814500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2196
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 978 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 929
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2160 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015d1ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015d1ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015d1ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015d1ba0, 0xc00081ea80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2154
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2012 [chan receive, 2 minutes]:
testing.(*T).Run(0xc000539040, {0x283bd2c?, 0xd18c2e2800?}, 0xc0017fa000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc000539040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc000539040, 0x33468b8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2196 [chan receive, 6 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0008dd6c0, 0xc000126870)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2010
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2158 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015d16c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015d16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015d16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015d16c0, 0xc00081e3c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2154
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2313 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffb24764de0?, {0xc00176da78?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x69c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001a2c5a0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000cca160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000cca160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000c656c0, 0xc000cca160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFreshStart({0x38c0b60, 0xc0003fc000}, 0xc000c656c0, {0xc000648030, 0xc})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:80 +0x275
k8s.io/minikube/test/integration.TestPause.func1.1(0xc000c656c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc000c656c0, 0xc00023c840)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2312
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2156 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015d1380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015d1380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015d1380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015d1380, 0xc00081e340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2154
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2323 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffb24764de0?, {0xc000ca1ae0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x5f8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000bbe270)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000552840)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000552840)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000538ea0, 0xc000552840)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x38c0b60?, 0xc00064c0e0?}, 0xc000538ea0, {0xc00079c018?, 0x6666fd75?}, {0xc0182036f0?, 0xc000ca1f60?}, {0x767333?, 0x6b8d6f?}, {0xc0008a0000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000538ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000538ea0, 0xc0007a4480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2322
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2203 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c64ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c64ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c64ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c64ea0, 0xc000814600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2196
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2202 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c64d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c64d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c64d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c64d00, 0xc000814580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2196
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2326 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000552840, 0xc000106780)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2323
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2199 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0001604e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0001604e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0001604e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0001604e0, 0xc000814400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2196
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2322 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0015d1d40, {0x28453d9?, 0x60400000004?}, 0xc0007a4480)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0015d1d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0015d1d40, 0xc0007a4400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2155
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2295 [syscall, locked to thread]:
syscall.SyscallN(0xc00006bce8?, {0xc00142db20?, 0xc0001fe738?, 0x10000c00151fb60?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00151fba8?, 0x38c0b60?, 0x8?, 0xe?, 0xc00142dc08?, 0x62281b?, 0x618ba6?, 0xc00071a230?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7d4, {0xc00153fa84?, 0x57c, 0x6d417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0019ca788?, {0xc00153fa84?, 0x1bb?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0019ca788, {0xc00153fa84, 0x57c, 0x57c})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000798d30, {0xc00153fa84?, 0xd?, 0x211?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014ed740, {0x389be00, 0xc000719120})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x389bf40, 0xc0014ed740}, {0x389be00, 0xc000719120}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4b879e0?, {0x389bf40, 0xc0014ed740})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0x389bf40?, 0xc0014ed740?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x389bf40, 0xc0014ed740}, {0x389bec0, 0xc000798d30}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x3346868?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 726
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2204 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc00067d680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c65040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c65040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c65040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c65040, 0xc000814700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2196
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2324 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x800?, {0xc000c5bb20?, 0x0?, 0x15?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4c32660?, 0x6e33b1?, 0xc000130508?, 0xc000c5bba0?, 0xc000c5bc08?, 0x62281b?, 0x618ba6?, 0xc000cba340?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x77c, {0xc000ca9207?, 0x5f9, 0xc000ca9000?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0007f0788?, {0xc000ca9207?, 0x655170?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0007f0788, {0xc000ca9207, 0x5f9, 0x5f9})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000818038, {0xc000ca9207?, 0xc000c5bd98?, 0x207?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000cb4900, {0x389be00, 0xc000798548})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x389bf40, 0xc000cb4900}, {0x389be00, 0xc000798548}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x389bf40, 0xc000cb4900})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x620c36?, {0x389bf40?, 0xc000cb4900?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x389bf40, 0xc000cb4900}, {0x389bec0, 0xc000818038}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000814880?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2323
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b


Test pass (152/198)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.26
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 1.32
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.34
12 TestDownloadOnly/v1.30.1/json-events 11.77
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.28
18 TestDownloadOnly/v1.30.1/DeleteAll 1.31
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 1.27
21 TestBinaryMirror 7.54
22 TestOffline 270.88
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.3
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.29
27 TestAddons/Setup 459.18
30 TestAddons/parallel/Ingress 70.1
31 TestAddons/parallel/InspektorGadget 27.45
32 TestAddons/parallel/MetricsServer 22.06
33 TestAddons/parallel/HelmTiller 34.67
35 TestAddons/parallel/CSI 108.14
36 TestAddons/parallel/Headlamp 37.37
37 TestAddons/parallel/CloudSpanner 21.58
38 TestAddons/parallel/LocalPath 99.79
39 TestAddons/parallel/NvidiaDevicePlugin 22.14
40 TestAddons/parallel/Yakd 6.03
41 TestAddons/parallel/Volcano 63.2
44 TestAddons/serial/GCPAuth/Namespaces 0.35
45 TestAddons/StoppedEnableDisable 57.31
47 TestCertExpiration 1104.43
48 TestDockerFlags 331.74
49 TestForceSystemdFlag 428.39
57 TestErrorSpam/start 18.53
58 TestErrorSpam/status 39.93
59 TestErrorSpam/pause 24.48
60 TestErrorSpam/unpause 24.74
61 TestErrorSpam/stop 65.32
64 TestFunctional/serial/CopySyncFile 0.03
65 TestFunctional/serial/StartWithProxy 254.8
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 134.7
68 TestFunctional/serial/KubeContext 0.15
69 TestFunctional/serial/KubectlGetPods 0.23
72 TestFunctional/serial/CacheCmd/cache/add_remote 27.83
73 TestFunctional/serial/CacheCmd/cache/add_local 12.03
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.26
75 TestFunctional/serial/CacheCmd/cache/list 0.3
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.98
77 TestFunctional/serial/CacheCmd/cache/cache_reload 39
78 TestFunctional/serial/CacheCmd/cache/delete 0.54
79 TestFunctional/serial/MinikubeKubectlCmd 0.55
81 TestFunctional/serial/ExtraConfig 134.04
82 TestFunctional/serial/ComponentHealth 0.18
83 TestFunctional/serial/LogsCmd 9.16
84 TestFunctional/serial/LogsFileCmd 11.51
85 TestFunctional/serial/InvalidService 22.68
91 TestFunctional/parallel/StatusCmd 44.6
95 TestFunctional/parallel/ServiceCmdConnect 29.95
96 TestFunctional/parallel/AddonsCmd 0.85
97 TestFunctional/parallel/PersistentVolumeClaim 40.97
99 TestFunctional/parallel/SSHCmd 25.69
100 TestFunctional/parallel/CpCmd 63.34
101 TestFunctional/parallel/MySQL 63.61
102 TestFunctional/parallel/FileSync 10.96
103 TestFunctional/parallel/CertSync 68.02
107 TestFunctional/parallel/NodeLabels 0.22
109 TestFunctional/parallel/NonActiveRuntimeDisabled 12.06
111 TestFunctional/parallel/License 3.45
112 TestFunctional/parallel/ServiceCmd/DeployApp 19.42
113 TestFunctional/parallel/Version/short 0.28
114 TestFunctional/parallel/Version/components 9.06
115 TestFunctional/parallel/ImageCommands/ImageListShort 8.1
116 TestFunctional/parallel/ImageCommands/ImageListTable 8.09
117 TestFunctional/parallel/ImageCommands/ImageListJson 8.27
118 TestFunctional/parallel/ImageCommands/ImageListYaml 8.09
119 TestFunctional/parallel/ImageCommands/ImageBuild 28.63
120 TestFunctional/parallel/ImageCommands/Setup 4.53
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 25.68
122 TestFunctional/parallel/ServiceCmd/List 14.21
123 TestFunctional/parallel/ServiceCmd/JSONOutput 14.26
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 21.6
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 32.48
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 10.44
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.72
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 10.81
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/ProfileCmd/profile_not_create 12.54
142 TestFunctional/parallel/ImageCommands/ImageRemove 17.93
143 TestFunctional/parallel/ProfileCmd/profile_list 12.13
144 TestFunctional/parallel/ProfileCmd/profile_json_output 12.09
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 21.06
146 TestFunctional/parallel/DockerEnv/powershell 49.63
147 TestFunctional/parallel/UpdateContextCmd/no_changes 2.79
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.73
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.72
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 10.98
151 TestFunctional/delete_addon-resizer_images 0.51
152 TestFunctional/delete_my-image_image 0.21
153 TestFunctional/delete_minikube_cached_images 0.19
157 TestMultiControlPlane/serial/StartCluster 740.72
158 TestMultiControlPlane/serial/DeployApp 12.8
160 TestMultiControlPlane/serial/AddWorkerNode 274.26
161 TestMultiControlPlane/serial/NodeLabels 0.19
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 31.16
166 TestImageBuild/serial/Setup 210.38
167 TestImageBuild/serial/NormalBuild 10.03
168 TestImageBuild/serial/BuildWithBuildArg 9.56
169 TestImageBuild/serial/BuildWithDockerIgnore 8.09
170 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.9
174 TestJSONOutput/start/Command 248.97
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 8.29
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 8.13
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 41.01
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 1.51
202 TestMainNoArgs 0.27
203 TestMinikubeProfile 556.13
206 TestMountStart/serial/StartWithMountFirst 167.43
207 TestMountStart/serial/VerifyMountFirst 10.39
208 TestMountStart/serial/StartWithMountSecond 167.3
209 TestMountStart/serial/VerifyMountSecond 10.29
210 TestMountStart/serial/DeleteFirst 29.83
211 TestMountStart/serial/VerifyMountPostDelete 10.39
212 TestMountStart/serial/Stop 33.45
213 TestMountStart/serial/RestartStopped 128.2
214 TestMountStart/serial/VerifyMountPostStop 10.16
217 TestMultiNode/serial/FreshStart2Nodes 455.47
218 TestMultiNode/serial/DeployApp2Nodes 9.63
221 TestMultiNode/serial/MultiNodeLabels 0.18
222 TestMultiNode/serial/ProfileList 12.78
224 TestMultiNode/serial/StopNode 118.86
225 TestMultiNode/serial/StartAfterStop 331.5
231 TestScheduledStopWindows 348.35
236 TestRunningBinaryUpgrade 1138.35
238 TestKubernetesUpgrade 1323.22
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.44
243 TestStoppedBinaryUpgrade/Setup 0.59
244 TestStoppedBinaryUpgrade/Upgrade 959.91
252 TestStoppedBinaryUpgrade/MinikubeLogs 10.33
TestDownloadOnly/v1.20.0/json-events (17.26s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-841800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-841800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (17.256775s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.26s)

TestDownloadOnly/v1.20.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-841800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-841800: exit status 85 (300.8717ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-841800 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |          |
	|         | -p download-only-841800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:21:17
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:21:17.240095    7404 out.go:291] Setting OutFile to fd 608 ...
	I0610 10:21:17.240845    7404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:21:17.241008    7404 out.go:304] Setting ErrFile to fd 612...
	I0610 10:21:17.241008    7404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0610 10:21:17.255347    7404 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0610 10:21:17.271019    7404 out.go:298] Setting JSON to true
	I0610 10:21:17.278993    7404 start.go:129] hostinfo: {"hostname":"minikube6","uptime":14766,"bootTime":1718000111,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 10:21:17.279193    7404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 10:21:17.285852    7404 out.go:97] [download-only-841800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 10:21:17.286895    7404 notify.go:220] Checking for updates...
	W0610 10:21:17.286895    7404 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0610 10:21:17.289630    7404 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 10:21:17.292472    7404 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 10:21:17.294672    7404 out.go:169] MINIKUBE_LOCATION=19046
	I0610 10:21:17.297614    7404 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0610 10:21:17.301953    7404 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 10:21:17.302949    7404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:21:23.078182    7404 out.go:97] Using the hyperv driver based on user configuration
	I0610 10:21:23.078182    7404 start.go:297] selected driver: hyperv
	I0610 10:21:23.078182    7404 start.go:901] validating driver "hyperv" against <nil>
	I0610 10:21:23.078182    7404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 10:21:23.139178    7404 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0610 10:21:23.139792    7404 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 10:21:23.139792    7404 cni.go:84] Creating CNI manager for ""
	I0610 10:21:23.139792    7404 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 10:21:23.139792    7404 start.go:340] cluster config:
	{Name:download-only-841800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-841800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:21:23.140806    7404 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:21:23.143715    7404 out.go:97] Downloading VM boot image ...
	I0610 10:21:23.144654    7404 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 10:21:26.528259    7404 out.go:97] Starting "download-only-841800" primary control-plane node in "download-only-841800" cluster
	I0610 10:21:26.528259    7404 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 10:21:26.585096    7404 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0610 10:21:26.585219    7404 cache.go:56] Caching tarball of preloaded images
	I0610 10:21:26.585219    7404 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 10:21:26.588676    7404 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0610 10:21:26.588801    7404 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0610 10:21:26.654753    7404 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0610 10:21:29.976931    7404 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0610 10:21:29.977795    7404 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0610 10:21:31.075507    7404 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 10:21:31.076329    7404 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-841800\config.json ...
	I0610 10:21:31.076885    7404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-841800\config.json: {Name:mk1b7ae0e1be983f6bd1f1d0269eeda4dc92ef39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:21:31.077058    7404 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 10:21:31.078686    7404 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-841800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-841800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 10:21:34.494524   12676 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
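
Note: exit status 85 is the expected outcome of this subtest: the profile was created with --download-only and never started, so "minikube logs" has no running cluster to read from (the stdout above says as much). The stderr warning about the Docker CLI context "default" recurs throughout this run and is environmental, not test-specific; one way to inspect the agent's context state (assuming the Docker CLI is on PATH) is:

    docker context ls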

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.32s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3222841s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.32s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.34s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-841800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-841800: (1.3373331s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.34s)
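
Taken together, the v1.20.0 subtests above exercise the download-only flow end to end: fetch the ISO, the preload tarball, and kubectl for a pinned Kubernetes version, then tear the profile down. A condensed reproduction, using only flags that appear in this log:

    out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-841800 --force --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
    out/minikube-windows-amd64.exe delete --all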

                                                
                                    
TestDownloadOnly/v1.30.1/json-events (11.77s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-289600 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-289600 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv: (11.7675212s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (11.77s)

                                                
                                    
TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-289600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-289600: exit status 85 (278.2658ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-841800 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | -p download-only-841800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-841800        | download-only-841800 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| start   | -o=json --download-only        | download-only-289600 | minikube6\jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | -p download-only-289600        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:21:37
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:21:37.533754    6392 out.go:291] Setting OutFile to fd 596 ...
	I0610 10:21:37.534651    6392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:21:37.534651    6392 out.go:304] Setting ErrFile to fd 668...
	I0610 10:21:37.534651    6392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:21:37.559166    6392 out.go:298] Setting JSON to true
	I0610 10:21:37.562182    6392 start.go:129] hostinfo: {"hostname":"minikube6","uptime":14786,"bootTime":1718000111,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 10:21:37.562182    6392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 10:21:37.568169    6392 out.go:97] [download-only-289600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 10:21:37.569952    6392 notify.go:220] Checking for updates...
	I0610 10:21:37.572638    6392 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 10:21:37.575300    6392 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 10:21:37.580052    6392 out.go:169] MINIKUBE_LOCATION=19046
	I0610 10:21:37.583127    6392 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0610 10:21:37.587920    6392 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 10:21:37.588764    6392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:21:43.418341    6392 out.go:97] Using the hyperv driver based on user configuration
	I0610 10:21:43.418544    6392 start.go:297] selected driver: hyperv
	I0610 10:21:43.418544    6392 start.go:901] validating driver "hyperv" against <nil>
	I0610 10:21:43.418544    6392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 10:21:43.467385    6392 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0610 10:21:43.468611    6392 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 10:21:43.468748    6392 cni.go:84] Creating CNI manager for ""
	I0610 10:21:43.468748    6392 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:21:43.468748    6392 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:21:43.468942    6392 start.go:340] cluster config:
	{Name:download-only-289600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-289600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:21:43.468942    6392 iso.go:125] acquiring lock: {Name:mk2dffb8ecfce8309070ad455f05bfdd1e213bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:21:43.473073    6392 out.go:97] Starting "download-only-289600" primary control-plane node in "download-only-289600" cluster
	I0610 10:21:43.473073    6392 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 10:21:43.517091    6392 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 10:21:43.517091    6392 cache.go:56] Caching tarball of preloaded images
	I0610 10:21:43.517747    6392 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 10:21:43.521369    6392 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0610 10:21:43.521496    6392 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0610 10:21:43.590455    6392 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4?checksum=md5:f110de85c4cd01fa5de0726fbc529387 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 10:21:46.948119    6392 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0610 10:21:46.948982    6392 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-289600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-289600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 10:21:49.230978    4808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.28s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAll (1.31s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3097339s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (1.31s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.27s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-289600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-289600: (1.2696091s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.27s)

                                                
                                    
TestBinaryMirror (7.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-282000 --alsologtostderr --binary-mirror http://127.0.0.1:60263 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-282000 --alsologtostderr --binary-mirror http://127.0.0.1:60263 --driver=hyperv: (6.6180078s)
helpers_test.go:175: Cleaning up "binary-mirror-282000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-282000
--- PASS: TestBinaryMirror (7.54s)
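
The --binary-mirror flag points minikube's kubectl/kubelet/kubeadm downloads at an alternate HTTP endpoint instead of dl.k8s.io; the test stands one up on a loopback port. The invocation, as logged (the port is whatever the local test server happened to bind):

    out/minikube-windows-amd64.exe start --download-only -p binary-mirror-282000 --binary-mirror http://127.0.0.1:60263 --driver=hyperv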

                                                
                                    
TestOffline (270.88s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-628600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-628600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m48.4433946s)
helpers_test.go:175: Cleaning up "offline-docker-628600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-628600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-628600: (42.4376106s)
--- PASS: TestOffline (270.88s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.3s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-987700
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-987700: exit status 85 (301.9282ms)

                                                
                                                
-- stdout --
	* Profile "addons-987700" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-987700"

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 10:22:02.448674    8780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.30s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-987700
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-987700: exit status 85 (286.8631ms)

                                                
                                                
-- stdout --
	* Profile "addons-987700" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-987700"

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 10:22:02.448674    9144 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)
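
Both PreSetup subtests assert that addon operations against a profile that does not exist yet fail fast with exit status 85 instead of hanging. Checked by hand from PowerShell (the profile name here is hypothetical):

    out/minikube-windows-amd64.exe addons enable dashboard -p no-such-profile
    echo $LASTEXITCODE   # expect 85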

                                                
                                    
TestAddons/Setup (459.18s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-987700 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-987700 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m39.1765459s)
--- PASS: TestAddons/Setup (459.18s)
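
With more than a dozen addons enabled in a single start, a quick way to confirm what is active on the profile afterwards ("minikube addons list" is a standard subcommand; exact output varies by version):

    out/minikube-windows-amd64.exe addons list -p addons-987700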

                                                
                                    
TestAddons/parallel/Ingress (70.1s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-987700 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-987700 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-987700 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [51ff865e-30c4-459d-8863-f62b19f61c98] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [51ff865e-30c4-459d-8863-f62b19f61c98] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.0199656s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.9820193s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-987700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0610 10:31:49.362218   13600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-987700 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 ip: (2.6771101s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.17.154.55
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 addons disable ingress-dns --alsologtostderr -v=1: (16.3621952s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 addons disable ingress --alsologtostderr -v=1: (22.8741484s)
--- PASS: TestAddons/parallel/Ingress (70.10s)
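
The flow above drives an nginx Service through the ingress controller by Host header, then resolves hello-john.test via ingress-dns. A minimal sketch of the kind of manifest testdata\nginx-ingress-v1.yaml applies (a sketch only, not the repo's actual testdata; the backend Service name "nginx" is assumed):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx-ingress
    spec:
      rules:
      - host: nginx.example.com      # matches the Host header curl sends above
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx          # assumed backend Service
                port:
                  number: 80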

                                                
                                    
TestAddons/parallel/InspektorGadget (27.45s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9p5pw" [acbf4f7a-165b-4a35-8400-23b38e260168] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0165954s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-987700
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-987700: (22.4237788s)
--- PASS: TestAddons/parallel/InspektorGadget (27.45s)

                                                
                                    
TestAddons/parallel/MetricsServer (22.06s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 5.3614ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-nx77k" [a3843313-02f7-4747-9954-6f39d1ef0e41] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0115423s
addons_test.go:417: (dbg) Run:  kubectl --context addons-987700 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 addons disable metrics-server --alsologtostderr -v=1: (16.8098316s)
--- PASS: TestAddons/parallel/MetricsServer (22.06s)
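
"kubectl top pods" only succeeds once the metrics API is registered and serving. If it ever fails here, a standard check (stock kubectl, nothing minikube-specific) is:

    kubectl --context addons-987700 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-987700 top nodes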

                                                
                                    
TestAddons/parallel/HelmTiller (34.67s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 5.5493ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-zkbpr" [8327d9f7-427c-494c-9979-f3aba8dd131c] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0509626s
addons_test.go:475: (dbg) Run:  kubectl --context addons-987700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-987700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.2458068s)
addons_test.go:480: kubectl --context addons-987700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:475: (dbg) Run:  kubectl --context addons-987700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-987700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.9712584s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 addons disable helm-tiller --alsologtostderr -v=1: (16.8793468s)
--- PASS: TestAddons/parallel/HelmTiller (34.67s)
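
The "Unable to use a TTY" noise on the first helm-test run comes from passing -it while stdin is not a terminal (this is a CI agent); kubectl falls back to streaming logs and the command itself still succeeds. Dropping -it avoids the warning entirely:

    kubectl --context addons-987700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 --namespace=kube-system -- version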

                                                
                                    
TestAddons/parallel/CSI (108.14s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 32.5217ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-987700 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-987700 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d550c3e9-33da-4045-aa92-9558b0459102] Pending
helpers_test.go:344: "task-pv-pod" [d550c3e9-33da-4045-aa92-9558b0459102] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d550c3e9-33da-4045-aa92-9558b0459102] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 25.0150942s
addons_test.go:586: (dbg) Run:  kubectl --context addons-987700 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-987700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-987700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-987700 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-987700 delete pod task-pv-pod: (2.0314232s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-987700 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-987700 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-987700 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [705450a4-eed8-46b0-be9b-80463ed35c1b] Pending
helpers_test.go:344: "task-pv-pod-restore" [705450a4-eed8-46b0-be9b-80463ed35c1b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [705450a4-eed8-46b0-be9b-80463ed35c1b] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0126771s
addons_test.go:628: (dbg) Run:  kubectl --context addons-987700 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-987700 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-987700 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.5928998s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 addons disable volumesnapshots --alsologtostderr -v=1: (16.9745524s)
--- PASS: TestAddons/parallel/CSI (108.14s)
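
The CSI sequence above is: PVC -> pod -> VolumeSnapshot -> restored PVC -> restored pod. A minimal sketch of the two objects at its core (a sketch only, not the repo's testdata; the class names the csi-hostpath-driver addon installs are assumed):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      storageClassName: csi-hostpath-sc        # assumed addon-provided class
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed
      source:
        persistentVolumeClaimName: hpvc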

                                                
                                    
TestAddons/parallel/Headlamp (37.37s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-987700 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-987700 --alsologtostderr -v=1: (17.3501413s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7fc69f7444-vz5zz" [d0664644-7a5e-4f60-b6c3-9a837c31e0f5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-vz5zz" [d0664644-7a5e-4f60-b6c3-9a837c31e0f5] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 20.013048s
--- PASS: TestAddons/parallel/Headlamp (37.37s)

                                                
                                    
TestAddons/parallel/CloudSpanner (21.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-cjknl" [7ec0f8d2-7de7-4633-bfa2-31df0a764c0c] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012223s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-987700
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-987700: (16.5617727s)
--- PASS: TestAddons/parallel/CloudSpanner (21.58s)

                                                
                                    
TestAddons/parallel/LocalPath (99.79s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-987700 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-987700 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-987700 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f5a1e943-7745-42e4-81fd-4db3d057ab58] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f5a1e943-7745-42e4-81fd-4db3d057ab58] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f5a1e943-7745-42e4-81fd-4db3d057ab58] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 17.0122266s
addons_test.go:992: (dbg) Run:  kubectl --context addons-987700 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 ssh "cat /opt/local-path-provisioner/pvc-160300bd-a1e2-4e63-bf32-0e1d8c304ff0_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 ssh "cat /opt/local-path-provisioner/pvc-160300bd-a1e2-4e63-bf32-0e1d8c304ff0_default_test-pvc/file1": (11.1878142s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-987700 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-987700 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m2.8853811s)
--- PASS: TestAddons/parallel/LocalPath (99.79s)
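
storage-provisioner-rancher installs the local-path provisioner: the test binds a PVC, writes file1 from a pod, and reads it back from the node path logged above via minikube ssh. A minimal PVC sketch ("local-path" is the provisioner's conventional class name, assumed here):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: local-path   # assumed default class
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 64Mi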

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (22.14s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-k8grz" [46b6f0c1-4060-4b09-bcb4-0092c555a3a3] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0127686s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-987700
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-987700: (17.1241807s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.14s)

                                                
                                    
TestAddons/parallel/Yakd (6.03s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-jgmcm" [477f2658-bfd9-465c-94fa-1d585b65722d] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0185922s
--- PASS: TestAddons/parallel/Yakd (6.03s)

                                                
                                    
TestAddons/parallel/Volcano (63.2s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:897: volcano-admission stabilized in 21.9791ms
addons_test.go:905: volcano-controller stabilized in 21.9791ms
addons_test.go:889: volcano-scheduler stabilized in 21.9791ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-5b7qt" [3f8353ad-d185-4c85-85f5-82b4723bdd3a] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 6.0141084s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-lspnk" [6efcdfdf-d9df-45d3-b969-767662382d6e] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.0183407s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-r2mq8" [c75a2436-de44-46d8-9833-d3bcb9b7dcbb] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0167148s
addons_test.go:924: (dbg) Run:  kubectl --context addons-987700 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-987700 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-987700 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [acb71a87-d3a5-4805-80fe-afec7e71a7dc] Pending
helpers_test.go:344: "test-job-nginx-0" [acb71a87-d3a5-4805-80fe-afec7e71a7dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [acb71a87-d3a5-4805-80fe-afec7e71a7dc] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 19.0146054s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-987700 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-987700 addons disable volcano --alsologtostderr -v=1: (27.2404254s)
--- PASS: TestAddons/parallel/Volcano (63.20s)
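
What the test drives: the volcano addon installs the scheduler, admission, and controller components, then a Volcano Job is submitted and watched until its pod runs. A hedged reconstruction of what testdata\vcjob.yaml plausibly contains, inferred only from the names in the log (job test-job, namespace my-volcano, container nginx); the exact manifest may differ:

    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      schedulerName: volcano
      minAvailable: 1
      tasks:
        - name: nginx
          replicas: 1
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: nginx
                  image: nginx:latest

Submitted and inspected as in the log: kubectl create -f vcjob.yaml, then kubectl get vcjob -n my-volcano.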

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.35s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-987700 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-987700 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.35s)
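
What this asserts: with the gcp-auth addon enabled, minikube replicates its gcp-auth secret into namespaces created afterwards, so pods anywhere can pull with the same credentials. The check, with demo as a placeholder context:

    kubectl --context demo create ns new-namespace
    kubectl --context demo get secret gcp-auth -n new-namespace    # present without any manual copy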

                                                
                                    
TestAddons/StoppedEnableDisable (57.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-987700
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-987700: (43.6250513s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-987700
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-987700: (5.5306517s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-987700
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-987700: (5.195198s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-987700
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-987700: (2.9602775s)
--- PASS: TestAddons/StoppedEnableDisable (57.31s)
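
The point here is that addon toggles must work against a stopped cluster. A sketch, with demo as a placeholder profile:

    minikube stop -p demo
    minikube addons enable dashboard -p demo     # succeeds while the VM is down
    minikube addons disable dashboard -p demo
    minikube addons disable gvisor -p demo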

                                                
                                    
TestCertExpiration (1104.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-150600 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-150600 --memory=2048 --cert-expiration=3m --driver=hyperv: (8m52.171302s)
E0610 13:09:28.825973    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 13:09:41.904397    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-150600 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-150600 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m41.1724883s)
helpers_test.go:175: Cleaning up "cert-expiration-150600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-150600
E0610 13:18:17.625982    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-150600: (51.0540976s)
--- PASS: TestCertExpiration (1104.43s)
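
The flow above starts a cluster with deliberately short-lived client certificates, waits for them to lapse, then restarts with a normal expiry so minikube regenerates them. A sketch, demo being a placeholder profile:

    minikube start -p demo --memory=2048 --cert-expiration=3m --driver=hyperv
    # wait out the 3 minutes, then restart; expired certs are regenerated:
    minikube start -p demo --memory=2048 --cert-expiration=8760h --driver=hyperv    # 8760h = one year
    minikube delete -p demo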

                                                
                                    
TestDockerFlags (331.74s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-873700 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-873700 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (4m20.2322112s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-873700 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-873700 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.8658417s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-873700 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-873700 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (11.2008084s)
helpers_test.go:175: Cleaning up "docker-flags-873700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-873700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-873700: (49.4350323s)
--- PASS: TestDockerFlags (331.74s)
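
--docker-env values should surface in docker.service's Environment and --docker-opt values in its ExecStart, which is exactly what the two systemctl probes check. A sketch with a placeholder profile:

    minikube start -p demo --docker-env=FOO=BAR --docker-opt=debug --driver=hyperv
    minikube -p demo ssh "sudo systemctl show docker --property=Environment --no-pager"    # expect FOO=BAR
    minikube -p demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"      # expect --debug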

                                                
                                    
TestForceSystemdFlag (428.39s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-354700 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-354700 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (6m10.13829s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-354700 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-354700 ssh "docker info --format {{.CgroupDriver}}": (10.8694737s)
helpers_test.go:175: Cleaning up "force-systemd-flag-354700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-354700
E0610 12:59:40.807987    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 12:59:41.905334    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-354700: (47.3804883s)
--- PASS: TestForceSystemdFlag (428.39s)
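
--force-systemd forces the container runtime onto the systemd cgroup driver; the whole assertion is one docker info query. Sketch, placeholder profile:

    minikube start -p demo --force-systemd --driver=hyperv
    minikube -p demo ssh "docker info --format {{.CgroupDriver}}"    # expect: systemd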

                                                
                                    
TestErrorSpam/start (18.53s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 start --dry-run: (6.0426014s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 start --dry-run: (6.2368031s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 start --dry-run: (6.2471397s)
--- PASS: TestErrorSpam/start (18.53s)

                                                
                                    
TestErrorSpam/status (39.93s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 status: (13.8157303s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 status: (13.0747876s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 status: (13.0326869s)
--- PASS: TestErrorSpam/status (39.93s)

                                                
                                    
TestErrorSpam/pause (24.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 pause: (8.4660799s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 pause
E0610 10:39:41.836819    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:39:41.852912    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:39:41.867897    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:39:41.899867    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:39:41.946445    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:39:42.041359    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:39:42.215993    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:39:42.549766    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:39:43.201963    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:39:44.489138    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 pause: (7.9758977s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 pause
E0610 10:39:47.059540    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:39:52.187047    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 pause: (8.0317902s)
--- PASS: TestErrorSpam/pause (24.48s)

                                                
                                    
TestErrorSpam/unpause (24.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 unpause
E0610 10:40:02.431453    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 unpause: (8.2505544s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 unpause: (8.2553836s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 unpause: (8.2284435s)
--- PASS: TestErrorSpam/unpause (24.74s)

                                                
                                    
TestErrorSpam/stop (65.32s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 stop
E0610 10:40:22.912888    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 stop: (41.6919264s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 stop
E0610 10:41:03.883801    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 stop: (11.9736706s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-947800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-947800 stop: (11.6499259s)
--- PASS: TestErrorSpam/stop (65.32s)
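
Each TestErrorSpam subtest runs the same subcommand three times with logs redirected via --log_dir and fails if unexpected warnings or errors leak into the output. The invocation pattern, with placeholder profile and directory:

    minikube -p demo --log_dir C:\Temp\demo start --dry-run
    minikube -p demo --log_dir C:\Temp\demo pause
    minikube -p demo --log_dir C:\Temp\demo stop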

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\7548\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (254.80s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-228600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0610 10:42:25.821768    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:44:41.837442    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 10:45:09.670678    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-228600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (4m14.7897964s)
--- PASS: TestFunctional/serial/StartWithProxy (254.80s)

                                                
                                    
TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (134.70s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-228600 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-228600 --alsologtostderr -v=8: (2m14.692523s)
functional_test.go:659: soft start took 2m14.6948297s for "functional-228600" cluster.
--- PASS: TestFunctional/serial/SoftStart (134.70s)

                                                
                                    
TestFunctional/serial/KubeContext (0.15s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.15s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-228600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (27.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 cache add registry.k8s.io/pause:3.1: (9.3581631s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 cache add registry.k8s.io/pause:3.3: (9.2799325s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 cache add registry.k8s.io/pause:latest: (9.1965599s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (27.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (12.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-228600 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1027892516\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-228600 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1027892516\001: (2.7467702s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 cache add minikube-local-cache-test:functional-228600
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 cache add minikube-local-cache-test:functional-228600: (8.8175171s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 cache delete minikube-local-cache-test:functional-228600
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-228600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (12.03s)
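
minikube cache add handles locally built images the same way as registry ones: the image is exported to the host-side cache and loaded into the node. A sketch with placeholder names:

    docker build -t local-cache-demo:v1 .
    minikube -p demo cache add local-cache-demo:v1      # works like cache add registry.k8s.io/pause:3.1
    minikube -p demo cache delete local-cache-demo:v1
    docker rmi local-cache-demo:v1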

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.30s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh sudo crictl images: (9.9785525s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.98s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (39.00s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh sudo docker rmi registry.k8s.io/pause:latest: (10.1479122s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (10.1342043s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	W0610 10:49:13.346902    6224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 cache reload: (8.6521917s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0610 10:49:41.832273    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (10.0613694s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (39.00s)
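
The reload test removes an image inside the node, confirms crictl no longer finds it, then repopulates it from the host cache. Sketch, placeholder profile:

    minikube -p demo ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest    # non-zero exit: image gone
    minikube -p demo cache reload                                             # pushes cached images back in
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again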

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.54s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.55s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 kubectl -- --context functional-228600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

                                                
                                    
TestFunctional/serial/ExtraConfig (134.04s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-228600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-228600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m14.0421686s)
functional_test.go:757: restart took 2m14.0421686s for "functional-228600" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (134.04s)
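
--extra-config takes component.flag=value pairs (component being apiserver, kubelet, scheduler, and so on) that are handed to that Kubernetes component on start. The shape of the invocation, with demo as a placeholder profile:

    minikube start -p demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all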

                                                
                                    
TestFunctional/serial/ComponentHealth (0.18s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-228600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

                                                
                                    
TestFunctional/serial/LogsCmd (9.16s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 logs: (9.1596767s)
--- PASS: TestFunctional/serial/LogsCmd (9.16s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (11.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1970413543\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1970413543\001\logs.txt: (11.5095426s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.51s)

                                                
                                    
TestFunctional/serial/InvalidService (22.68s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-228600 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-228600
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-228600: exit status 115 (17.9880454s)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.17.144.165:32343 |
	|-----------|-------------|-------------|-----------------------------|
-- /stdout --
** stderr ** 
	W0610 10:52:57.981949    3312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-228600 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-228600 delete -f testdata\invalidsvc.yaml: (1.2599305s)
--- PASS: TestFunctional/serial/InvalidService (22.68s)
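
testdata\invalidsvc.yaml stands up a NodePort service whose backing pod can never run, so minikube service must fail with SVC_UNREACHABLE (exit 115) rather than print a usable URL, which is what the captured output shows. A hedged reconstruction of the idea, not the exact manifest:

    apiVersion: v1
    kind: Pod
    metadata:
      name: invalid-svc
      labels: {app: invalid-svc}
    spec:
      containers:
        - name: app
          image: registry.k8s.io/pause:no-such-tag    # any unpullable image keeps the pod from running
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector: {app: invalid-svc}
      ports: [{port: 80}]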

                                                
                                    
TestFunctional/parallel/StatusCmd (44.60s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 status
E0610 10:54:41.832915    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 status: (14.832511s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.7219858s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 status -o json: (15.0371799s)
--- PASS: TestFunctional/parallel/StatusCmd (44.60s)
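
minikube status takes a Go template over its status struct via -f/--format as well as -o json. The "kublet:" in the recorded command is just literal label text in the template (typo included); the actual field is .Kubelet. Sketch, placeholder profile:

    minikube -p demo status
    minikube -p demo status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
    minikube -p demo status -o json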

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (29.95s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-228600 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-228600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-5fnsp" [ddcb0b11-6f35-49ac-a27b-e6e85d9c451d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-5fnsp" [ddcb0b11-6f35-49ac-a27b-e6e85d9c451d] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0190401s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 service hello-node-connect --url: (21.4483893s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.17.144.165:32039
functional_test.go:1671: http://172.17.144.165:32039: success! body:

Hostname: hello-node-connect-57b4589c47-5fnsp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.17.144.165:8080/

Request Headers:
	accept-encoding=gzip
	host=172.17.144.165:32039
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (29.95s)
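
This is the standard NodePort round-trip: deploy, expose, resolve the URL through minikube, then GET it and let echoserver report the request back. Sketch, placeholder context and profile:

    kubectl --context demo create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context demo expose deployment hello-node --type=NodePort --port=8080
    minikube -p demo service hello-node --url    # prints http://<node-ip>:<nodeport>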

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.85s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.85s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7ddb20ed-d760-437c-90c6-9dfe48efdb1f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0171184s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-228600 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-228600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-228600 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-228600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d650d7af-b1ae-4408-9e42-f1dc53880d56] Pending
helpers_test.go:344: "sp-pod" [d650d7af-b1ae-4408-9e42-f1dc53880d56] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d650d7af-b1ae-4408-9e42-f1dc53880d56] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.0145241s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-228600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-228600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-228600 delete -f testdata/storage-provisioner/pod.yaml: (1.5760804s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-228600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [565d87ad-5e27-45fc-953f-707f714af5f6] Pending
helpers_test.go:344: "sp-pod" [565d87ad-5e27-45fc-953f-707f714af5f6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [565d87ad-5e27-45fc-953f-707f714af5f6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0177632s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-228600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.97s)
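
The sequence above: claim storage, mount it in a pod, write a file, delete the pod, mount the same claim in a fresh pod, and find the file still there. A hedged reconstruction of the two manifests (names from the log; size and image are assumptions):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: [ReadWriteOnce]
      resources: {requests: {storage: 500Mi}}
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      labels: {test: storage-provisioner}
    spec:
      containers:
        - name: myfrontend
          image: nginx
          volumeMounts: [{mountPath: /tmp/mount, name: mypd}]
      volumes:
        - name: mypd
          persistentVolumeClaim: {claimName: myclaim}

Persistence is then checked with kubectl exec sp-pod -- touch /tmp/mount/foo, a pod delete and re-apply, and kubectl exec sp-pod -- ls /tmp/mount.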

                                                
                                    
TestFunctional/parallel/SSHCmd (25.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh "echo hello": (13.0693033s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh "cat /etc/hostname": (12.6249628s)
--- PASS: TestFunctional/parallel/SSHCmd (25.69s)

                                                
                                    
TestFunctional/parallel/CpCmd (63.34s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.2343418s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh -n functional-228600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh -n functional-228600 "sudo cat /home/docker/cp-test.txt": (11.0701705s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 cp functional-228600:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd198327557\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 cp functional-228600:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd198327557\001\cp-test.txt: (11.2317281s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh -n functional-228600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh -n functional-228600 "sudo cat /home/docker/cp-test.txt": (11.2256046s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.5162767s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh -n functional-228600 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh -n functional-228600 "sudo cat /tmp/does/not/exist/cp-test.txt": (12.0560793s)
--- PASS: TestFunctional/parallel/CpCmd (63.34s)
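
minikube cp copies in both directions and creates missing target directories on the node (hence the /tmp/does/not/exist case). Sketch, demo standing in for both profile and node name:

    minikube -p demo cp testdata\cp-test.txt /home/docker/cp-test.txt        # host -> node
    minikube -p demo cp demo:/home/docker/cp-test.txt C:\Temp\cp-test.txt    # node -> host
    minikube -p demo ssh -n demo "sudo cat /home/docker/cp-test.txt"         # verify contents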

                                                
                                    
TestFunctional/parallel/MySQL (63.61s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-228600 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-c8w2k" [4db47864-82a9-4721-9ad9-6c65321ef4d6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-c8w2k" [4db47864-82a9-4721-9ad9-6c65321ef4d6] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 50.0212401s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;": exit status 1 (319.7491ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;": exit status 1 (348.9347ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;": exit status 1 (380.4801ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;": exit status 1 (312.5195ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;": exit status 1 (297.1659ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-228600 exec mysql-64454c8b5c-c8w2k -- mysql -ppassword -e "show databases;"
E0610 10:59:41.852922    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (63.61s)

                                                
                                    
TestFunctional/parallel/FileSync (10.96s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7548/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /etc/test/nested/copy/7548/hosts"
E0610 10:56:05.044833    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /etc/test/nested/copy/7548/hosts": (10.9617694s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.96s)
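
FileSync relies on minikube copying everything under the host's .minikube\files directory into the node at the mirrored path on start (the 7548 path segment is just per-run scratch naming from the test harness). Sketch with placeholder paths:

    # host: %USERPROFILE%\.minikube\files\etc\test\hello  ->  node: /etc/test/hello
    minikube start -p demo
    minikube -p demo ssh "sudo cat /etc/test/hello"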

                                                
                                    
TestFunctional/parallel/CertSync (68.02s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7548.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /etc/ssl/certs/7548.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /etc/ssl/certs/7548.pem": (11.671064s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7548.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /usr/share/ca-certificates/7548.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /usr/share/ca-certificates/7548.pem": (11.4733104s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.7492902s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/75482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /etc/ssl/certs/75482.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /etc/ssl/certs/75482.pem": (11.3681966s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/75482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /usr/share/ca-certificates/75482.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /usr/share/ca-certificates/75482.pem": (11.1632918s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.5952905s)
--- PASS: TestFunctional/parallel/CertSync (68.02s)
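
Note: the numeric .0 files checked above are OpenSSL subject-hash names for the synced certificates. A sketch of how such a name can be derived locally, assuming the PEM files from this run:

    # Compute the subject hash that names the /etc/ssl/certs/<hash>.0 link.
    openssl x509 -noout -subject_hash -in 7548.pem
    minikube -p functional-228600 ssh "sudo cat /etc/ssl/certs/51391683.0"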

                                                
                                    
TestFunctional/parallel/NodeLabels (0.22s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-228600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.22s)
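
Note: the go-template above iterates the label map of the first node. An equivalent query without a template, for comparison:

    kubectl --context functional-228600 get nodes -o jsonpath='{.items[0].metadata.labels}'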

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (12.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228600 ssh "sudo systemctl is-active crio": exit status 1 (12.0560259s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 10:53:18.974950   10548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (12.06s)
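
Note: systemctl is-active exits 0 only for "active", so the inactive crio unit prints "inactive" and exits with status 3; the non-zero exit is the expected result here. A sketch of the same check:

    # "inactive" plus a non-zero exit confirms the non-active runtime is disabled.
    minikube -p functional-228600 ssh "sudo systemctl is-active crio" || echo "crio is not active"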

                                                
                                    
TestFunctional/parallel/License (3.45s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.4241107s)
--- PASS: TestFunctional/parallel/License (3.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (19.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-228600 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-228600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-txvff" [0bc3d4da-7fc8-4b74-b5ba-e82766ac2075] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-txvff" [0bc3d4da-7fc8-4b74-b5ba-e82766ac2075] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.01032s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.42s)

                                                
                                    
TestFunctional/parallel/Version/short (0.28s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

                                                
                                    
TestFunctional/parallel/Version/components (9.06s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 version -o=json --components: (9.0549954s)
--- PASS: TestFunctional/parallel/Version/components (9.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (8.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image ls --format short --alsologtostderr: (8.1036701s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-228600 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-228600
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-228600
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-228600 image ls --format short --alsologtostderr:
W0610 10:56:31.374524    9204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0610 10:56:31.458408    9204 out.go:291] Setting OutFile to fd 952 ...
I0610 10:56:31.458408    9204 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:56:31.458408    9204 out.go:304] Setting ErrFile to fd 592...
I0610 10:56:31.459397    9204 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:56:31.476391    9204 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 10:56:31.477400    9204 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 10:56:31.477400    9204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
I0610 10:56:33.884216    9204 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0610 10:56:33.884325    9204 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:33.899307    9204 ssh_runner.go:195] Run: systemctl --version
I0610 10:56:33.899307    9204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
I0610 10:56:36.330929    9204 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0610 10:56:36.330994    9204 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:36.331116    9204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
I0610 10:56:39.171885    9204 main.go:141] libmachine: [stdout =====>] : 172.17.144.165

                                                
                                                
I0610 10:56:39.172621    9204 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:39.172934    9204 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
I0610 10:56:39.272352    9204 ssh_runner.go:235] Completed: systemctl --version: (5.3730017s)
I0610 10:56:39.282035    9204 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.10s)
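
Note: as the trace shows, image ls gathers its data by running docker images over SSH inside the VM. The same raw listing can be fetched directly:

    minikube -p functional-228600 ssh "docker images --no-trunc --format '{{json .}}'"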

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (8.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image ls --format table --alsologtostderr: (8.0923116s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-228600 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-228600 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-228600 | 76adc80fde575 | 30B    |
| docker.io/library/nginx                     | alpine            | 70ea0d8cc5300 | 48.3MB |
| docker.io/library/nginx                     | latest            | 4f67c83422ec7 | 188MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.1           | 25a1387cdab82 | 111MB  |
| registry.k8s.io/kube-proxy                  | v1.30.1           | 747097150317f | 84.7MB |
| registry.k8s.io/kube-apiserver              | v1.30.1           | 91be940803172 | 117MB  |
| registry.k8s.io/kube-scheduler              | v1.30.1           | a52dc94f0a912 | 62MB   |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-228600 image ls --format table --alsologtostderr:
W0610 10:56:39.476778    3384 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0610 10:56:39.561780    3384 out.go:291] Setting OutFile to fd 860 ...
I0610 10:56:39.562779    3384 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:56:39.562779    3384 out.go:304] Setting ErrFile to fd 876...
I0610 10:56:39.562779    3384 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:56:39.585091    3384 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 10:56:39.586079    3384 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 10:56:39.586079    3384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
I0610 10:56:41.957007    3384 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0610 10:56:41.957007    3384 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:41.971650    3384 ssh_runner.go:195] Run: systemctl --version
I0610 10:56:41.971650    3384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
I0610 10:56:44.394999    3384 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0610 10:56:44.394999    3384 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:44.396011    3384 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
I0610 10:56:47.250589    3384 main.go:141] libmachine: [stdout =====>] : 172.17.144.165

                                                
                                                
I0610 10:56:47.250722    3384 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:47.251299    3384 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
I0610 10:56:47.359945    3384 ssh_runner.go:235] Completed: systemctl --version: (5.3882507s)
I0610 10:56:47.370001    3384 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (8.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (8.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image ls --format json --alsologtostderr: (8.2703155s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-228600 image ls --format json --alsologtostderr:
[{"id":"76adc80fde5754aed704449772b273497508f4ea4aac618fb21c9ac467b3dae9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-228600"],"size":"30"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-228600"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"62000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"111000000"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"84700000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-228600 image ls --format json --alsologtostderr:
W0610 10:56:31.839313    3880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0610 10:56:31.923060    3880 out.go:291] Setting OutFile to fd 876 ...
I0610 10:56:31.942245    3880 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:56:31.943263    3880 out.go:304] Setting ErrFile to fd 904...
I0610 10:56:31.943263    3880 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:56:31.959232    3880 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 10:56:31.960237    3880 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 10:56:31.961258    3880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
I0610 10:56:34.371577    3880 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0610 10:56:34.371577    3880 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:34.385550    3880 ssh_runner.go:195] Run: systemctl --version
I0610 10:56:34.385550    3880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
I0610 10:56:36.883227    3880 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0610 10:56:36.883227    3880 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:36.883227    3880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
I0610 10:56:39.775947    3880 main.go:141] libmachine: [stdout =====>] : 172.17.144.165

                                                
                                                
I0610 10:56:39.776713    3880 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:39.776713    3880 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
I0610 10:56:39.889738    3880 ssh_runner.go:235] Completed: systemctl --version: (5.5040393s)
I0610 10:56:39.898844    3880 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (8.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (8.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image ls --format yaml --alsologtostderr: (8.0873885s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-228600 image ls --format yaml --alsologtostderr:
- id: 76adc80fde5754aed704449772b273497508f4ea4aac618fb21c9ac467b3dae9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-228600
size: "30"
- id: 4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "84700000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "62000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117000000"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "111000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-228600
size: "32900000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-228600 image ls --format yaml --alsologtostderr:
W0610 10:56:40.114112    7804 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0610 10:56:40.201380    7804 out.go:291] Setting OutFile to fd 612 ...
I0610 10:56:40.201788    7804 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:56:40.201788    7804 out.go:304] Setting ErrFile to fd 760...
I0610 10:56:40.201788    7804 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:56:40.219613    7804 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 10:56:40.219613    7804 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 10:56:40.222079    7804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
I0610 10:56:42.619397    7804 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0610 10:56:42.619473    7804 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:42.633938    7804 ssh_runner.go:195] Run: systemctl --version
I0610 10:56:42.633938    7804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
I0610 10:56:45.032062    7804 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0610 10:56:45.032722    7804 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:45.032959    7804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
I0610 10:56:47.893474    7804 main.go:141] libmachine: [stdout =====>] : 172.17.144.165

                                                
                                                
I0610 10:56:47.893700    7804 main.go:141] libmachine: [stderr =====>] : 
I0610 10:56:47.893700    7804 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
I0610 10:56:47.991912    7804 ssh_runner.go:235] Completed: systemctl --version: (5.3579299s)
I0610 10:56:48.001842    7804 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (28.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228600 ssh pgrep buildkitd: exit status 1 (10.2685988s)

                                                
                                                
** stderr ** 
	W0610 10:56:47.583259    1812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image build -t localhost/my-image:functional-228600 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image build -t localhost/my-image:functional-228600 testdata\build --alsologtostderr: (10.4896966s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-228600 image build -t localhost/my-image:functional-228600 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 7929f979746a
---> Removed intermediate container 7929f979746a
---> 27dd14349d4a
Step 3/3 : ADD content.txt /
---> a5d05b2f7975
Successfully built a5d05b2f7975
Successfully tagged localhost/my-image:functional-228600
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-228600 image build -t localhost/my-image:functional-228600 testdata\build --alsologtostderr:
W0610 10:56:57.826245    6892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0610 10:56:57.918378    6892 out.go:291] Setting OutFile to fd 672 ...
I0610 10:56:57.938266    6892 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:56:57.938311    6892 out.go:304] Setting ErrFile to fd 864...
I0610 10:56:57.938357    6892 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:56:57.959233    6892 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 10:56:57.978362    6892 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 10:56:57.979781    6892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
I0610 10:57:00.309772    6892 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0610 10:57:00.310054    6892 main.go:141] libmachine: [stderr =====>] : 
I0610 10:57:00.326959    6892 ssh_runner.go:195] Run: systemctl --version
I0610 10:57:00.326959    6892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228600 ).state
I0610 10:57:02.731153    6892 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0610 10:57:02.731153    6892 main.go:141] libmachine: [stderr =====>] : 
I0610 10:57:02.731221    6892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228600 ).networkadapters[0]).ipaddresses[0]
I0610 10:57:05.514291    6892 main.go:141] libmachine: [stdout =====>] : 172.17.144.165

                                                
                                                
I0610 10:57:05.514554    6892 main.go:141] libmachine: [stderr =====>] : 
I0610 10:57:05.515087    6892 sshutil.go:53] new ssh client: &{IP:172.17.144.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228600\id_rsa Username:docker}
I0610 10:57:05.622097    6892 ssh_runner.go:235] Completed: systemctl --version: (5.2950951s)
I0610 10:57:05.622097    6892 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2351086811.tar
I0610 10:57:05.635370    6892 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0610 10:57:05.673159    6892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2351086811.tar
I0610 10:57:05.681787    6892 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2351086811.tar: stat -c "%s %y" /var/lib/minikube/build/build.2351086811.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2351086811.tar': No such file or directory
I0610 10:57:05.682022    6892 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2351086811.tar --> /var/lib/minikube/build/build.2351086811.tar (3072 bytes)
I0610 10:57:05.747347    6892 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2351086811
I0610 10:57:05.781056    6892 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2351086811 -xf /var/lib/minikube/build/build.2351086811.tar
I0610 10:57:05.800594    6892 docker.go:360] Building image: /var/lib/minikube/build/build.2351086811
I0610 10:57:05.810524    6892 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-228600 /var/lib/minikube/build/build.2351086811
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0610 10:57:08.115767    6892 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-228600 /var/lib/minikube/build/build.2351086811: (2.3052244s)
I0610 10:57:08.130020    6892 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2351086811
I0610 10:57:08.158989    6892 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2351086811.tar
I0610 10:57:08.181404    6892 build_images.go:217] Built localhost/my-image:functional-228600 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2351086811.tar
I0610 10:57:08.181404    6892 build_images.go:133] succeeded building to: functional-228600
I0610 10:57:08.181404    6892 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image ls: (7.8688759s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.63s)
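
Note: the three logged build steps imply a build context like the sketch below; the actual contents of testdata\build may differ in detail:

    # Hypothetical reconstruction of the build context from the Step 1/3..3/3 output.
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    printf 'hello\n' > content.txt
    minikube -p functional-228600 image build -t localhost/my-image:functional-228600 .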

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.53s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.254424s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-228600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (25.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image load --daemon gcr.io/google-containers/addon-resizer:functional-228600 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image load --daemon gcr.io/google-containers/addon-resizer:functional-228600 --alsologtostderr: (17.1504247s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image ls: (8.5343891s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (25.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (14.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 service list: (14.2106574s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (14.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 service list -o json: (14.259522s)
functional_test.go:1490: Took "14.259522s" to run "out/minikube-windows-amd64.exe -p functional-228600 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image load --daemon gcr.io/google-containers/addon-resizer:functional-228600 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image load --daemon gcr.io/google-containers/addon-resizer:functional-228600 --alsologtostderr: (13.0402004s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image ls: (8.5636535s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (32.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.0601825s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-228600
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image load --daemon gcr.io/google-containers/addon-resizer:functional-228600 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image load --daemon gcr.io/google-containers/addon-resizer:functional-228600 --alsologtostderr: (18.864166s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image ls: (9.2921202s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (32.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-228600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-228600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-228600 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-228600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 5248: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 1948: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-228600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.72s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-228600 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [31f2b8b9-e4fa-4f2a-bd9d-1870207076c7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [31f2b8b9-e4fa-4f2a-bd9d-1870207076c7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.0159787s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.72s)
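
Note: the setup step above waits for the tunnel test pod to become Ready. An equivalent manual wait using the same label selector:

    kubectl --context functional-228600 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m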

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image save gcr.io/google-containers/addon-resizer:functional-228600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image save gcr.io/google-containers/addon-resizer:functional-228600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.8071229s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.81s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-228600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 10496: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (12.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.0338432s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (17.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image rm gcr.io/google-containers/addon-resizer:functional-228600 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image rm gcr.io/google-containers/addon-resizer:functional-228600 --alsologtostderr: (8.8426825s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image ls: (9.0891866s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.93s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (12.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (11.8622514s)
functional_test.go:1311: Took "11.8622514s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "263.7837ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (12.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.8074847s)
functional_test.go:1362: Took "11.8097509s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "277.1471ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (12.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (21.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.8484528s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image ls: (9.2127781s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (21.06s)
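
Note: together with ImageSaveToFile above, this completes a save/load round trip through a tarball. The sequence, condensed from the logged commands:

    minikube -p functional-228600 image save gcr.io/google-containers/addon-resizer:functional-228600 addon-resizer-save.tar
    minikube -p functional-228600 image load addon-resizer-save.tar
    minikube -p functional-228600 image ls   # the functional-228600 tag is listed again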

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (49.63s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-228600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-228600"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-228600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-228600": (32.9634002s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-228600 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-228600 docker-env | Invoke-Expression ; docker images": (16.6477867s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (49.63s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.79s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 update-context --alsologtostderr -v=2: (2.7913976s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.79s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.73s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 update-context --alsologtostderr -v=2: (2.7284539s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.73s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.72s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 update-context --alsologtostderr -v=2: (2.717438s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-228600
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228600 image save --daemon gcr.io/google-containers/addon-resizer:functional-228600 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228600 image save --daemon gcr.io/google-containers/addon-resizer:functional-228600 --alsologtostderr: (10.5319654s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-228600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.98s)

TestFunctional/delete_addon-resizer_images (0.51s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-228600
--- PASS: TestFunctional/delete_addon-resizer_images (0.51s)

TestFunctional/delete_my-image_image (0.21s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-228600
--- PASS: TestFunctional/delete_my-image_image (0.21s)

TestFunctional/delete_minikube_cached_images (0.19s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-228600
--- PASS: TestFunctional/delete_minikube_cached_images (0.19s)

TestMultiControlPlane/serial/StartCluster (740.72s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-368100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0610 11:03:17.563251    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:17.573630    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:17.603958    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:17.643298    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:17.686294    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:17.775879    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:17.944039    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:18.267262    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:18.909047    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:20.201817    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:22.762501    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:27.897439    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:38.144275    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:03:58.625800    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:04:39.587115    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:04:41.842774    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 11:06:01.510604    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:08:17.557851    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:08:45.356772    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:09:41.845573    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 11:12:45.061889    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 11:13:17.556643    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-368100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m41.0896516s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 status -v=7 --alsologtostderr: (39.62436s)
--- PASS: TestMultiControlPlane/serial/StartCluster (740.72s)
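The E-lines above (and recurring through the rest of this report) are not failures of this test: client-go's cert_rotation reload loop appears to still be watching client certificates of the functional-228600 and addons-987700 profiles, whose directories were removed when those profiles were torn down, so every reload attempt fails with a missing-path error. A sketch of the failing operation (the client.crt path is copied from the log; the matching client.key path and the plain LoadX509KeyPair call are assumptions, client-go's loader is more involved):

package main

import (
    "crypto/tls"
    "fmt"
)

func main() {
    profile := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600`
    // The profile directory was removed when the profile was deleted, so the
    // load fails with "The system cannot find the path specified".
    _, err := tls.LoadX509KeyPair(profile+`\client.crt`, profile+`\client.key`)
    if err != nil {
        fmt.Println("key failed with :", err) // same wording as the E-lines above
    }
}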

TestMultiControlPlane/serial/DeployApp (12.8s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-368100 -- rollout status deployment/busybox: (3.6499148s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-9tfq9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-9tfq9 -- nslookup kubernetes.io: (1.8142631s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-kff2v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-kff2v -- nslookup kubernetes.io: (1.7138775s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-s49nb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-9tfq9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-kff2v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-s49nb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-9tfq9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-kff2v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-368100 -- exec busybox-fc5497c4f-s49nb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.80s)

TestMultiControlPlane/serial/AddWorkerNode (274.26s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-368100 -v=7 --alsologtostderr
E0610 11:18:17.566261    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-368100 -v=7 --alsologtostderr: (3m41.3210425s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-368100 status -v=7 --alsologtostderr
E0610 11:19:40.727637    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:19:41.853612    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-368100 status -v=7 --alsologtostderr: (52.9371799s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (274.26s)

TestMultiControlPlane/serial/NodeLabels (0.19s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-368100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (31.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (31.1618615s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (31.16s)

TestImageBuild/serial/Setup (210.38s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-115700 --driver=hyperv
E0610 11:36:20.739721    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:38:17.570810    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-115700 --driver=hyperv: (3m30.3710658s)
--- PASS: TestImageBuild/serial/Setup (210.38s)

TestImageBuild/serial/NormalBuild (10.03s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-115700
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-115700: (10.0295071s)
--- PASS: TestImageBuild/serial/NormalBuild (10.03s)

TestImageBuild/serial/BuildWithBuildArg (9.56s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-115700
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-115700: (9.5509719s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.56s)

TestImageBuild/serial/BuildWithDockerIgnore (8.09s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-115700
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-115700: (8.0777268s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.09s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.9s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-115700
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-115700: (7.8865637s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.90s)

TestJSONOutput/start/Command (248.97s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-126600 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0610 11:43:17.581139    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-126600 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m8.9523874s)
--- PASS: TestJSONOutput/start/Command (248.97s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (8.29s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-126600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-126600 --output=json --user=testUser: (8.2841012s)
--- PASS: TestJSONOutput/pause/Command (8.29s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (8.13s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-126600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-126600 --output=json --user=testUser: (8.122371s)
--- PASS: TestJSONOutput/unpause/Command (8.13s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (41.01s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-126600 --output=json --user=testUser
E0610 11:44:41.858555    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-126600 --output=json --user=testUser: (41.0005266s)
--- PASS: TestJSONOutput/stop/Command (41.01s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.51s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-026700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-026700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (295.7795ms)

-- stdout --
	{"specversion":"1.0","id":"1e095ca2-32ab-4b96-9b82-f8301e637a58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-026700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e53d3bfc-3ebb-4177-8973-d4fc53e9b035","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"b8294f0d-cf5d-47c1-8518-17b2924e2f5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6b362e23-bf75-4b3a-9e89-e4684e344eea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"08cf0252-0a2b-4ac1-bdca-8739a7374630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19046"}}
	{"specversion":"1.0","id":"041d4aed-0d8a-4ae6-9c68-3cce235eccd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dfcb3954-dd59-4625-af03-152d8514a1f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0610 11:45:24.048879    3164 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-026700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-026700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-026700: (1.2022879s)
--- PASS: TestErrorJSONOutput (1.51s)
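Every --output=json line above is a CloudEvents-style envelope, so the failure is machine-readable: the io.k8s.sigs.minikube.error event carries the exit code the process then returns (56). A decoding sketch (field names taken from the lines above; the struct is an assumption that covers only what this test prints):

package main

import (
    "encoding/json"
    "fmt"
)

// event mirrors the io.k8s.sigs.minikube.* envelopes printed above.
type event struct {
    Type string            `json:"type"`
    Data map[string]string `json:"data"`
}

func main() {
    // Sample line copied from the -- stdout -- block above.
    line := `{"specversion":"1.0","id":"dfcb3954-dd59-4625-af03-152d8514a1f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
    var e event
    if err := json.Unmarshal([]byte(line), &e); err != nil {
        panic(err)
    }
    fmt.Println(e.Type, e.Data["name"], "exit", e.Data["exitcode"])
}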

TestMainNoArgs (0.27s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.27s)

TestMinikubeProfile (556.13s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-398600 --driver=hyperv
E0610 11:46:05.096296    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 11:48:17.582037    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-398600 --driver=hyperv: (3m27.5475174s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-398600 --driver=hyperv
E0610 11:49:41.862265    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-398600 --driver=hyperv: (3m32.5081301s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-398600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.1284498s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-398600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0610 11:53:00.750361    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.017165s)
helpers_test.go:175: Cleaning up "second-398600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-398600
E0610 11:53:17.577571    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-398600: (47.3138391s)
helpers_test.go:175: Cleaning up "first-398600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-398600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-398600: (41.6870612s)
--- PASS: TestMinikubeProfile (556.13s)

TestMountStart/serial/StartWithMountFirst (167.43s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-314000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-314000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m46.4224637s)
--- PASS: TestMountStart/serial/StartWithMountFirst (167.43s)

TestMountStart/serial/VerifyMountFirst (10.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-314000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-314000 ssh -- ls /minikube-host: (10.3885546s)
--- PASS: TestMountStart/serial/VerifyMountFirst (10.39s)

TestMountStart/serial/StartWithMountSecond (167.3s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-314000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0610 11:58:17.588723    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 11:59:41.867546    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-314000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m46.2971163s)
--- PASS: TestMountStart/serial/StartWithMountSecond (167.30s)

TestMountStart/serial/VerifyMountSecond (10.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-314000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-314000 ssh -- ls /minikube-host: (10.2934434s)
--- PASS: TestMountStart/serial/VerifyMountSecond (10.29s)

TestMountStart/serial/DeleteFirst (29.83s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-314000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-314000 --alsologtostderr -v=5: (29.833624s)
--- PASS: TestMountStart/serial/DeleteFirst (29.83s)

TestMountStart/serial/VerifyMountPostDelete (10.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-314000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-314000 ssh -- ls /minikube-host: (10.3884358s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (10.39s)

TestMountStart/serial/Stop (33.45s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-314000
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-314000: (33.4493655s)
--- PASS: TestMountStart/serial/Stop (33.45s)

TestMountStart/serial/RestartStopped (128.2s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-314000
E0610 12:02:45.110558    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 12:03:17.590849    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-314000: (2m7.1933099s)
--- PASS: TestMountStart/serial/RestartStopped (128.20s)

TestMountStart/serial/VerifyMountPostStop (10.16s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-314000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-314000 ssh -- ls /minikube-host: (10.1633295s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (10.16s)

TestMultiNode/serial/FreshStart2Nodes (455.47s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-813300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0610 12:08:17.588841    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 12:09:40.765074    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 12:09:41.878667    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-813300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (7m9.5911573s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-813300 status --alsologtostderr: (25.8747241s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (455.47s)

TestMultiNode/serial/DeployApp2Nodes (9.63s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- rollout status deployment/busybox: (3.4261291s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-czxmt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-czxmt -- nslookup kubernetes.io: (1.8624317s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-z28tq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-czxmt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-z28tq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-czxmt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-813300 -- exec busybox-fc5497c4f-z28tq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.63s)

TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-813300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

TestMultiNode/serial/ProfileList (12.78s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.7783058s)
--- PASS: TestMultiNode/serial/ProfileList (12.78s)

TestMultiNode/serial/StopNode (118.86s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 node stop m03
E0610 12:19:25.126884    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
E0610 12:19:41.879793    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-813300 node stop m03: (1m2.5703937s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-813300 status: exit status 7 (28.1581976s)

-- stdout --
	multinode-813300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-813300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-813300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0610 12:20:26.846082    4188 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-813300 status --alsologtostderr: exit status 7 (28.126469s)

-- stdout --
	multinode-813300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-813300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-813300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0610 12:20:55.000938    6260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0610 12:20:55.091905    6260 out.go:291] Setting OutFile to fd 860 ...
	I0610 12:20:55.092923    6260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:20:55.092923    6260 out.go:304] Setting ErrFile to fd 984...
	I0610 12:20:55.092923    6260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:20:55.113017    6260 out.go:298] Setting JSON to false
	I0610 12:20:55.113224    6260 mustload.go:65] Loading cluster: multinode-813300
	I0610 12:20:55.113303    6260 notify.go:220] Checking for updates...
	I0610 12:20:55.114021    6260 config.go:182] Loaded profile config "multinode-813300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 12:20:55.114171    6260 status.go:255] checking status of multinode-813300 ...
	I0610 12:20:55.115153    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:20:57.439624    6260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:20:57.439928    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:20:57.440134    6260 status.go:330] multinode-813300 host status = "Running" (err=<nil>)
	I0610 12:20:57.440134    6260 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:20:57.440872    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:20:59.770169    6260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:20:59.770169    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:20:59.770887    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:21:02.631057    6260 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:21:02.631387    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:21:02.631453    6260 host.go:66] Checking if "multinode-813300" exists ...
	I0610 12:21:02.644830    6260 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 12:21:02.644830    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300 ).state
	I0610 12:21:04.944598    6260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:21:04.944598    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:21:04.945423    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300 ).networkadapters[0]).ipaddresses[0]
	I0610 12:21:07.742105    6260 main.go:141] libmachine: [stdout =====>] : 172.17.159.171
	
	I0610 12:21:07.742175    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:21:07.742839    6260 sshutil.go:53] new ssh client: &{IP:172.17.159.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300\id_rsa Username:docker}
	I0610 12:21:07.838325    6260 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1934526s)
	I0610 12:21:07.851951    6260 ssh_runner.go:195] Run: systemctl --version
	I0610 12:21:07.874474    6260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:21:07.906840    6260 kubeconfig.go:125] found "multinode-813300" server: "https://172.17.159.171:8443"
	I0610 12:21:07.906895    6260 api_server.go:166] Checking apiserver status ...
	I0610 12:21:07.920492    6260 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:21:07.966143    6260 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1957/cgroup
	W0610 12:21:07.985968    6260 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1957/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 12:21:07.999568    6260 ssh_runner.go:195] Run: ls
	I0610 12:21:08.007589    6260 api_server.go:253] Checking apiserver healthz at https://172.17.159.171:8443/healthz ...
	I0610 12:21:08.015223    6260 api_server.go:279] https://172.17.159.171:8443/healthz returned 200:
	ok
	I0610 12:21:08.015223    6260 status.go:422] multinode-813300 apiserver status = Running (err=<nil>)
	I0610 12:21:08.015223    6260 status.go:257] multinode-813300 status: &{Name:multinode-813300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 12:21:08.015223    6260 status.go:255] checking status of multinode-813300-m02 ...
	I0610 12:21:08.016400    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:21:10.366908    6260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:21:10.366908    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:21:10.366908    6260 status.go:330] multinode-813300-m02 host status = "Running" (err=<nil>)
	I0610 12:21:10.366908    6260 host.go:66] Checking if "multinode-813300-m02" exists ...
	I0610 12:21:10.367814    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:21:12.704245    6260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:21:12.704487    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:21:12.704683    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:21:15.487444    6260 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:21:15.487444    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:21:15.487444    6260 host.go:66] Checking if "multinode-813300-m02" exists ...
	I0610 12:21:15.499721    6260 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 12:21:15.499721    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m02 ).state
	I0610 12:21:17.800644    6260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0610 12:21:17.800644    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:21:17.800644    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-813300-m02 ).networkadapters[0]).ipaddresses[0]
	I0610 12:21:20.560920    6260 main.go:141] libmachine: [stdout =====>] : 172.17.151.128
	
	I0610 12:21:20.561480    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:21:20.561480    6260 sshutil.go:53] new ssh client: &{IP:172.17.151.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-813300-m02\id_rsa Username:docker}
	I0610 12:21:20.665777    6260 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1654166s)
	I0610 12:21:20.678885    6260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:21:20.708789    6260 status.go:257] multinode-813300-m02 status: &{Name:multinode-813300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 12:21:20.708789    6260 status.go:255] checking status of multinode-813300-m03 ...
	I0610 12:21:20.709843    6260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-813300-m03 ).state
	I0610 12:21:22.986480    6260 main.go:141] libmachine: [stdout =====>] : Off
	
	I0610 12:21:22.986480    6260 main.go:141] libmachine: [stderr =====>] : 
	I0610 12:21:22.986843    6260 status.go:330] multinode-813300-m03 host status = "Stopped" (err=<nil>)
	I0610 12:21:22.986843    6260 status.go:343] host is not running, skipping remaining checks
	I0610 12:21:22.986843    6260 status.go:257] multinode-813300-m03 status: &{Name:multinode-813300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (118.86s)
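
The libmachine calls above show how minikube's Hyper-V driver collects per-node status: it shells out to PowerShell for the VM power state and the first network adapter's IP, then probes disk usage and kubelet over SSH. A minimal Go sketch of the two PowerShell probes, assuming only a local Hyper-V VM name (illustrative, not minikube's driver code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// psQuery runs a PowerShell expression the way the log shows libmachine
// doing it: no profile, non-interactive, capturing stdout.
func psQuery(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "multinode-813300-m02" // substitute any local Hyper-V VM name

	// Power state, e.g. "Running" or "Off" as seen in the log.
	state, err := psQuery(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
	if err != nil {
		fmt.Println("state query failed:", err)
		return
	}
	fmt.Println("state:", state)

	// First IP of the first network adapter, used to build the SSH client.
	ip, err := psQuery(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
	if err != nil {
		fmt.Println("ip query failed:", err)
		return
	}
	fmt.Println("ip:", ip)
}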

                                                
                                    
TestMultiNode/serial/StartAfterStop (331.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 node start m03 -v=7 --alsologtostderr
E0610 12:23:17.595503    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 12:24:41.886454    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-813300 node start m03 -v=7 --alsologtostderr: (4m52.4711815s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-813300 status -v=7 --alsologtostderr
E0610 12:26:20.777463    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-813300 status -v=7 --alsologtostderr: (38.8378062s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (331.50s)

                                                
                                    
TestScheduledStopWindows (348.35s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-811900 --memory=2048 --driver=hyperv
E0610 12:48:17.615574    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 12:49:41.893021    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-811900 --memory=2048 --driver=hyperv: (3m31.1009925s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-811900 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-811900 --schedule 5m: (11.6448057s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-811900 -n scheduled-stop-811900
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-811900 -n scheduled-stop-811900: exit status 1 (10.0250171s)

                                                
                                                
** stderr ** 
	W0610 12:51:09.805012    6652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-811900 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-811900 -- sudo systemctl show minikube-scheduled-stop --no-page: (10.443341s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-811900 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-811900 --schedule 5s: (11.5047262s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-811900
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-811900: exit status 7 (2.576425s)

                                                
                                                
-- stdout --
	scheduled-stop-811900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:52:41.786002    6876 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-811900 -n scheduled-stop-811900
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-811900 -n scheduled-stop-811900: exit status 7 (2.5586487s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:52:44.362558    2440 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-811900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-811900
E0610 12:52:48.811430    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-811900: (28.479716s)
--- PASS: TestScheduledStopWindows (348.35s)
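
The flow above is the scheduled-stop contract: "stop --schedule" returns immediately, "status --format={{.TimeToStop}}" exposes the countdown, and once the 5s schedule fires, "status" reports Stopped with exit status 7. A minimal Go sketch of that final polling step, assuming the minikube binary is on PATH and using the profile name from the log (the test itself invokes out/minikube-windows-amd64.exe):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const profile = "scheduled-stop-811900" // profile name taken from the log

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// status exits non-zero once the host is down ("exit status 7 (may
		// be ok)" above), so ignore the error and inspect stdout instead.
		out, _ := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}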

                                                
                                    
TestRunningBinaryUpgrade (1138.35s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.137117342.exe start -p running-upgrade-623500 --memory=2200 --vm-driver=hyperv
E0610 12:53:17.608714    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
E0610 12:54:41.890291    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.137117342.exe start -p running-upgrade-623500 --memory=2200 --vm-driver=hyperv: (8m38.3140969s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-623500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0610 13:03:17.608389    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-623500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (9m7.2792075s)
helpers_test.go:175: Cleaning up "running-upgrade-623500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-623500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-623500: (1m11.9330802s)
--- PASS: TestRunningBinaryUpgrade (1138.35s)

                                                
                                    
TestKubernetesUpgrade (1323.22s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-758600 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-758600 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (6m37.3628095s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-758600
E0610 13:04:41.904073    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-758600: (42.4338578s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-758600 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-758600 status --format={{.Host}}: exit status 7 (2.6714255s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 13:05:06.100739   12852 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-758600 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-758600 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: (7m22.4414359s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-758600 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-758600 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-758600 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (329.0734ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-758600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 13:12:31.422443    5516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-758600
	    minikube start -p kubernetes-upgrade-758600 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7586002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-758600 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-758600 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv
E0610 13:13:17.625723    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-758600 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: (6m35.2847635s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-758600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-758600
E0610 13:19:41.904064    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-987700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-758600: (42.4769406s)
--- PASS: TestKubernetesUpgrade (1323.22s)
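
The exit-status-106 attempt above is minikube's downgrade guard: the requested v1.20.0 is older than the cluster's running v1.30.1, so start refuses before touching the VM. A sketch of that kind of version check using golang.org/x/mod/semver; this illustrates the comparison only and is not minikube's own implementation:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func checkDowngrade(current, requested string) error {
	// semver.Compare requires the "v" prefix, which both versions in the
	// log already carry (v1.30.1, v1.20.0).
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkDowngrade("v1.30.1", "v1.20.0")) // refused, as in the log
	fmt.Println(checkDowngrade("v1.30.1", "v1.30.1")) // nil: same version is fine
}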

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-157300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-157300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (441.2193ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-157300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 12:53:15.447154   12876 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)
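
This subtest exercises pure argument validation: combining --no-kubernetes with --kubernetes-version is rejected up front with usage exit code 14, before any Hyper-V work starts. A minimal sketch of such a mutually exclusive flag check with the standard flag package (minikube's real CLI is cobra-based; this is illustrative only):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the usage exit code seen in the log
	}
	fmt.Println("flags OK")
}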

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (959.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2659160833.exe start -p stopped-upgrade-054400 --memory=2200 --vm-driver=hyperv
E0610 12:58:17.615254    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2659160833.exe start -p stopped-upgrade-054400 --memory=2200 --vm-driver=hyperv: (8m36.0371812s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2659160833.exe -p stopped-upgrade-054400 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2659160833.exe -p stopped-upgrade-054400 stop: (37.3715958s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-054400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0610 13:08:17.615244    7548 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-228600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-054400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m46.4970437s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (959.91s)
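
The upgrade scenario above is three commands: start a cluster with an older release binary (v1.26.0 here), stop it, then start the same profile with the binary under test. A compact Go sketch of that flow; the old-binary path is hypothetical (the log uses a randomly named temp file), and the flags mirror the log exactly:

package main

import (
	"log"
	"os/exec"
)

// run executes one step and aborts the flow on the first failure.
func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	const profile = "stopped-upgrade-054400"  // profile name from the log
	oldBin := `C:\Temp\minikube-v1.26.0.exe`  // hypothetical path to the old release
	newBin := `out\minikube-windows-amd64.exe`

	run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=hyperv")
	run(oldBin, "-p", profile, "stop")
	run(newBin, "start", "-p", profile, "--memory=2200", "--driver=hyperv")
}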

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (10.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-054400
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-054400: (10.328349s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.33s)

                                                
                                    

Test skip (30/198)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
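
The three driver skips above ("Skip if not linux.", "Skip if not darwin.") follow the standard Go pattern of gating a test on runtime.GOOS so it reports SKIP rather than FAIL on the wrong platform. A minimal illustrative example, not the suite's code (the test name is hypothetical):

package driver

import (
	"runtime"
	"testing"
)

// Lives in a file named e.g. driver_install_test.go.
func TestLinuxOnlyDriver(t *testing.T) {
	if runtime.GOOS != "linux" {
		t.Skip("Skip if not linux.")
	}
	// driver install/update assertions would run here on Linux
}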

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-228600 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-228600 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 10224: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)
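
The dashboard check launches "minikube dashboard --url" as a daemon and scans its stdout for a URL, giving up after the five-minute budget when none appears (the follow-up kill failed here with "Access is denied", a common Windows pitfall when terminating another process). A hedged Go sketch of that scan-with-timeout pattern, with command, profile, and timeout as assumptions:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	cmd := exec.Command("minikube", "dashboard", "--url", "-p", "functional-228600")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println(err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	defer cmd.Process.Kill() // the log shows this step itself can fail on Windows

	urlCh := make(chan string, 1)
	go func() {
		// Scan the daemon's stdout line by line until a URL shows up.
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			if line := sc.Text(); strings.HasPrefix(line, "http") {
				urlCh <- line
				return
			}
		}
	}()

	select {
	case url := <-urlCh:
		fmt.Println("dashboard at", url)
	case <-time.After(5 * time.Minute): // the test gave up after ~300s
		fmt.Println("output didn't produce a URL")
	}
}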

                                                
                                    
TestFunctional/parallel/DryRun (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-228600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-228600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0356051s)

                                                
                                                
-- stdout --
	* [functional-228600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 10:55:45.774513    1744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0610 10:55:45.852523    1744 out.go:291] Setting OutFile to fd 984 ...
	I0610 10:55:45.853498    1744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:55:45.853498    1744 out.go:304] Setting ErrFile to fd 788...
	I0610 10:55:45.853498    1744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:55:45.876514    1744 out.go:298] Setting JSON to false
	I0610 10:55:45.879512    1744 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16834,"bootTime":1718000111,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 10:55:45.879512    1744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 10:55:45.883514    1744 out.go:177] * [functional-228600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 10:55:45.887514    1744 notify.go:220] Checking for updates...
	I0610 10:55:45.890516    1744 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 10:55:45.895513    1744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:55:45.898506    1744 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 10:55:45.900517    1744 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:55:45.902533    1744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:55:45.906513    1744 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 10:55:45.907507    1744 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-228600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-228600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0311663s)

                                                
                                                
-- stdout --
	* [functional-228600] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0610 10:55:50.850167   13140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0610 10:55:50.953086   13140 out.go:291] Setting OutFile to fd 788 ...
	I0610 10:55:50.954091   13140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:55:50.954091   13140 out.go:304] Setting ErrFile to fd 860...
	I0610 10:55:50.954091   13140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:55:50.981139   13140 out.go:298] Setting JSON to false
	I0610 10:55:50.984648   13140 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16839,"bootTime":1718000111,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0610 10:55:50.984648   13140 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 10:55:50.990664   13140 out.go:177] * [functional-228600] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0610 10:55:50.993543   13140 notify.go:220] Checking for updates...
	I0610 10:55:50.995948   13140 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0610 10:55:50.998628   13140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:55:51.001628   13140 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0610 10:55:51.003637   13140 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:55:51.006718   13140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:55:51.013277   13140 config.go:182] Loaded profile config "functional-228600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 10:55:51.014334   13140 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    